In early May, a small group of Grio engineers (myself included) traveled to Minneapolis for RailsConf. Although I’ve presented at conferences in the past, this was my first time attending a tech conference exclusively to learn, and I had a great time. In this post, I’ll give a quick recap of my favorite talks, and pass on a few things that I learned from the experience overall.
The Grio design team has grown in both size and expertise over the past few years, and as our Design Manager, I spend a fair amount of time thinking about how we can improve our tools and processes to continue supporting that growth. One step I’ve taken has been to formally integrate a design thinking approach into all of our projects. In this post, I’ll provide a brief introduction to design thinking, explain how it works in practice, and dig into the critical role of empathy in good design.
You’ve probably heard about blockchain mostly, if not exclusively, in the context of cryptocurrency (e.g., Bitcoin). But blockchain technology also has exciting applications in industries beyond finance. In this post, I’ll talk about two areas where blockchain is just beginning to be applied — food supply chain tracking, and international aid.
In this post, I propose a pattern that allows apps to transmit data reliably over unstable network connections. I’ll be taking advantage of the modern architecture of the iOS platform, as well as the popular AFNetworking (or Alamofire) library. To follow along, you’ll need some familiarity with native iOS development, the NSOperation API, Core Data, and networking.
In Part 1 of this series, we dug into the technical side of AI music composition, including neural network and algorithmic methods. Now, I’d like to step back and focus on a different set of questions:
- Can AI-composed music be good, i.e., will BeyoncAI ever rival the real Beyoncé?
- How might AI change the music industry?
- Who owns the rights to AI-composed music?
Humans have been making music for as long as we can remember — but the tools and methods we use to do so have evolved significantly, from simple wooden drums, to wind and string instruments, to electronic synthesizers. And now, with projects like Google’s Magenta and Sony’s FlowMachines, we’re beginning to see the emergence of music that’s not just played by computers, but actually composed by artificial intelligence.
As a designer, I’ve thought a lot about what makes a product “user-friendly.” I know that certain combinations of color, typography, layout, and interaction feel more relevant and intuitive than others — but why? What are the underlying factors that make one interface meaningful and easy to navigate, while another is opaque and confusing?
Introduction to Part 1
This post is the first in a four-part series on creating Android custom views, and covers a few introductory topics, including: how to decide if a custom view is the best solution to your problem, the three basic methods for creating a custom view, and the required constructors you’ll need to implement when subclassing the View class.
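As a quick preview of that last point, here’s a minimal sketch of the constructors the framework expects when you subclass the View class. It’s written in Kotlin, and the class name and drawing logic are hypothetical, for illustration only:

```kotlin
import android.content.Context
import android.graphics.Canvas
import android.util.AttributeSet
import android.view.View

// Hypothetical custom view, used purely for illustration.
// @JvmOverloads generates the View(Context), View(Context, AttributeSet),
// and View(Context, AttributeSet, Int) constructors the framework needs,
// whether the view is inflated from XML or created in code.
class BadgeView @JvmOverloads constructor(
    context: Context,
    attrs: AttributeSet? = null,
    defStyleAttr: Int = 0
) : View(context, attrs, defStyleAttr) {

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        // Custom drawing logic would go here.
    }
}
```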
“WHAT THE HECK?! HOW CAN I UNLOCK MY PHONE WITH MY FACE?!”
Those were the words that came out of my mouth in October of 2017, as I pored over the user manual for my new iPhone X. It wasn’t all hyperbole, either — I really wanted to know, and I ended up dedicating quite a bit of time to learning about the science behind Apple’s new facial recognition technology. In the end, the answer to my question boiled down to two words — machine learning.
Last Christmas, I had a minor family tech crisis (we’ve all had those, right?). I was visiting my parents, and my mom asked me to AirDrop some photos from my iPhone to hers. I’ve AirDropped photos probably a hundred times, but this time, for some reason, it didn’t work. My phone showed the photos as “sent,” but they weren’t appearing on my mom’s phone.