An Introduction to the Internet of Things

by

“Smart” technology is quickly emerging in all areas of our lives. From smartphones to smart televisions, refrigerators, watches, and even dog collars, it seems like everything around us is being connected to the internet. This phenomenon is known as the Internet of Things (IoT). 

Textiles and Tech Styles – How Textiles Influence High Tech

by

It may seem strange to bring up textiles when discussing computer programming. However, my interest in the correlation between the two was piqued last week when my friend sent me a question currently circulating on the internet: Is it possible to knit DOOM? Thinking about this question led me to consider the immense influence that the textile industry has had on computer science and modern technology. 

Using Intents to Create Transitions in Android Applications

by

Creating a successful application isn’t just about ensuring that all of the components work; the layout and design of the application are also crucial. The design must be professional and engaging, and the layout should be easy for users to navigate. Design components, such as animations and navigation transitions, can also enhance the usability of the application.

An Introduction to GPT-3

by

When you think about the future of artificial intelligence (AI) technology, it’s likely that you don’t think of auto-completion. However, you probably should. In July 2020, OpenAI released a beta testing version of GPT-3, a new auto-completion program that could very likely define the next decade of AI programming. 

In this blog post, I will introduce you to OpenAI’s GPT-3 model and present the strengths, limitations, and potential of this new technology. 

What is GPT-3?

Generative Pre-trained Transformer 3 (GPT-3) is AI technology developed by OpenAI, a company co-founded by Elon Musk and dedicated to AI research. It is the third model in OpenAI’s GPT series of autoregressive language tools. 

GPT-3 is a language model that uses deep learning to create human-like text. Like other language processing systems, GPT-3 predicts the probability of each possible next word based on the given text and automatically provides the most likely continuation. It is similar to the auto-completion you see when you type something in the Google search bar or in the messaging application on your phone. 
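To make the idea of next-word prediction concrete, here is a toy sketch: a bigram model over a tiny hand-made corpus that suggests the most frequent follower of a word. This is vastly simpler than GPT-3 (which uses a deep neural network with billions of parameters), but it illustrates the same underlying principle of predicting the most probable next word from observed text. The corpus and function names here are illustrative, not part of GPT-3.

```python
from collections import Counter, defaultdict

# Tiny hand-made corpus (purely illustrative).
corpus = (
    "the cat sat on the mat "
    "the cat chased the mouse "
    "the dog sat on the rug"
).split()

# Count which word follows which: followers["the"] tallies every
# word observed immediately after "the".
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — the most frequent follower of "the"
print(predict_next("sat"))  # "on"
```

GPT-3 generalizes this idea: instead of counting adjacent word pairs, it learns statistical patterns over long contexts from a vast training corpus, which is why its completions read like fluent human text rather than simple word-pair lookups.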

How it Works

When GPT-3’s application programming interface (API) receives a small piece of text, it returns text based on the entry. The entry can be formulated as a phrase, a task, a question, or any kind of expression. 

GPT-3’s auto-completion success is based on the amount of data from which it is able to gather statistical information. GPT-3 has access to data from a wide range of sources, including curated data sets, Common Crawl, books, news, and internet web pages. Where GPT-2, GPT-3’s predecessor, had 1.5 billion parameters to analyze, GPT-3 has 175 billion parameters. To put it in perspective, the entirety of the English Wikipedia makes up only about 0.6 percent of GPT-3’s dataset. 

Defining Talent in UX Design

by

Countless books have been written on talent; we know talent when we see it, and we can sense it in the people around us. Yet while most of us have a fundamental understanding of what the word “talent” means, most of us would have a hard time clearly defining it.

Optimizing User Flow Test Automation with QA IDs

by

User flow testing, also known as workflow testing, analyzes how an application is performing from the standpoint of the user. In this post, I am going to talk about some of the challenges with automating these types of tests and how we’ve addressed these challenges on several recent projects. 

Understanding PDF Generation with Headless Chrome

by

Headless browsers are currently gaining popularity as an efficient way to test web applications because they run without a graphical user interface. In this post, I am going to discuss the benefits of Headless Chrome and two approaches for using Headless Chrome to automatically create PDF reports. 

The Uncertain Future of Moore’s Law

by

In 1965, Gordon Moore, who would go on to co-found Intel and serve as its CEO, predicted that the number of transistors on an integrated circuit (the main component of a computer chip) would double every two years for at least the next decade. This prediction, known today as Moore’s Law, has continued to hold since 1965. While it is known as Moore’s Law, Gordon Moore’s prediction is not truly a law; rather, it is a trend that chipmakers around the world have been encouraged to match via technological advancements, research, and development.
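The arithmetic behind the trend is simple compounding: doubling every two years means one doubling per two-year period, so a count grows by a factor of 2^(years / 2). A quick sketch, using a purely hypothetical starting figure rather than any historical chip's actual transistor count:

```python
def projected_transistors(initial_count: int, years: int) -> int:
    """Project a transistor count forward, doubling once every two years.

    Uses whole doublings only (years // 2), matching the informal
    statement of the trend.
    """
    return initial_count * 2 ** (years // 2)

# Hypothetical example: starting from 2,000 transistors, a decade of
# Moore's Law growth is five doublings: 2,000 * 2**5 = 64,000.
print(projected_transistors(2_000, 10))  # 64000
```

Over the decade Moore originally considered, that factor of 32 is what made his prediction so bold at the time.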