AI in Practice

Alejandro (Alex) Jaimes

Posted: December 5, 2017 · 5 min read

This is the final installment in a three-part series on artificial intelligence by DigitalOcean’s Head of R&D, Alejandro (Alex) Jaimes. Read the first post about the state of AI, and the second installment about how data and models feed computing.

So what does AI as a service mean for hobbyists, professional developers, engineering teams, the open source community, and companies today?

Starting an AI (or machine learning) project can be a daunting task at any level, and the right steps differ depending on the context. It's important to note that sophisticated algorithms are not a requirement for AI; more often than not, solutions can be simple. Even a basic machine learning algorithm can do a decent job on some problems, and once a process is set up, more sophisticated iterations become possible.
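To make "start simple" concrete, here is a minimal sketch of a first iteration. It assumes a Python and scikit-learn setup (the post doesn't prescribe a stack), uses a bundled dataset as a stand-in for your own data, and records a baseline metric that later, fancier iterations would have to beat:

```python
# Minimal baseline sketch (assumed stack: Python + scikit-learn; swap in your own data).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# A small bundled dataset stands in for your real problem.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# A simple, well-understood model is often a good first iteration.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# A clear metric gives subsequent iterations something to beat.
print("baseline accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The specific model and metric matter less than the habit: a working pipeline, however modest, that you can measure and iterate on.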

An alternative is to start with sophisticated algorithms, as long as there's a good understanding of what those algorithms do and it's "easy" to get them up and running. You don't want to spend your first iteration setting a large number of parameters you don't understand.

There are exceptions, and the right choices depend on many factors, including level of expertise, but in general it's feasible to start small, build, and iterate quickly: the goal is an initial solution that demonstrates value. Even if it's imperfect, setting up a process and obtaining data gets you off the ground. It's imperative, however, to ask the right questions, focus on the solution and the needs of whoever will be using what you build, and be resourceful and creative in combining data, models, and open source frameworks. Here's how that applies to different players in the tech space:

  • Hobbyists have the most flexibility and can dream up the wildest ideas, albeit with very limited resources. In many ways, this puts them in the best position to explore: a perfect scenario for an iterative approach, focused initially on a proof of concept and starting with simpler algorithms and existing models and datasets. Start small and experiment, a lot. There are plenty of open source tools and datasets for machine learning, and many city governments (NYC and SF are prime examples) have open data initiatives that can be leveraged (see the sketch after this list).
  • Professional developers and engineering teams should focus on solving very specific problems. In many "first" cases, these could revolve around cost savings, speed, efficiency, or specific product features. The "how", however, can follow the process outlined above for hobbyists. To start, treat the project like any other: figure out what is needed in terms of data and other resources, define clear metrics, work closely with your product team to ask the right questions, and focus strongly on how the solution will be used. That last point is the most critical, because it determines what algorithms and data are required. In many cases, the answers will point to simple solutions that don't initially need AI but that will enable it later. A change in an interface, for example, can significantly affect what users do, and that can make the AI problem you're trying to solve a lot easier.
  • The open source community has never played a more critical role, and that is no doubt one of the reasons AI is having such an impact. Important initiatives for the future include tools for cleaning, processing, and handling data; tools for exchanging and repurposing models; and packaging task-specific models for particular application domains so they can be easily deployed as services.
  • Companies need to focus on processes that enable access to data, constant updating of models, and experimentation. The field is evolving quickly, so making AI part of the cultural fabric of the company is what's really most critical. Algorithms will change and hardware will evolve, but the processes that enable AI have a clear path. In addition to data and experimentation, the focus should be, on one hand, on improving productivity and using AI as an enabler, and on the other, on having a workforce that evolves with it. That requires a strong human-centered perspective and a strategy that helps employees be more efficient and focused on customer needs. Internally, this means empowering developers and engineers to choose the tools they use, and setting up programs that keep them constantly in the loop on product and user needs, not in AI or other "silos".
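As an illustration of the open-data point in the hobbyist bullet above, the sketch below pulls a sample of rows from NYC Open Data's JSON API. The dataset identifier is a placeholder to be looked up on the portal, and the Socrata-style endpoint layout is an assumption rather than anything this post specifies:

```python
# Hedged sketch: fetch rows from a city open-data portal (NYC Open Data shown).
# The dataset ID below is a placeholder; browse https://opendata.cityofnewyork.us/ for real ones.
import requests

DATASET_ID = "xxxx-xxxx"  # placeholder: replace with an actual dataset identifier
url = f"https://data.cityofnewyork.us/resource/{DATASET_ID}.json"

# Socrata-backed portals return JSON and accept simple paging parameters such as $limit.
response = requests.get(url, params={"$limit": 100})
response.raise_for_status()
rows = response.json()

print(f"fetched {len(rows)} rows")
if rows:
    print("first record keys:", list(rows[0].keys()))
```

A hundred rows of real civic data is often enough to prototype a proof of concept before committing to a heavier pipeline.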

The Future

The field is evolving extremely quickly, and one could argue that most of the research being published consists mainly of experimentation, either applying known deep learning architectures to new problems or tweaking parameters. It's clear, however, that efforts, and progress, are being made in areas such as transfer learning, reinforcement learning, and unsupervised learning, among others. In terms of hardware, it's too early to say, but it's very positive to see new developments in the space.

Perhaps more important than advances in algorithms will be advances in how AI augments human abilities. There will be a much tighter integration between humans and machines than what computing has created thus far. For hobbyists, professional developers, engineering teams, the open source community, and companies, this translates to having a strong human-centered focus.

Conclusion

I’ve referred to AI throughout this series, but most of my examples relate to machine learning. One of the key differences between the two is that true AI applications will have an even stronger focus on user interaction and experience. At the end of the day, it’s the applications we build that will make a difference, AI or not. How “smart” the system is, or what algorithms it uses, won’t matter.

Try your hand at Machine Learning with the DigitalOcean Machine Learning One-Click application.

Alejandro (Alex) Jaimes is Head of R&D at DigitalOcean. Alex enjoys scuba diving and started coding in Assembly when he was 12. In spite of his fear of heights, he's climbed a peak or two, gone paragliding, and ridden a bull in a rodeo. He's been a startup CTO and advisor, and has held leadership positions at Yahoo, Telefonica, IDIAP, FujiXerox, and IBM TJ Watson, among others. He holds a Ph.D. from Columbia University.

Learn more by visiting his personal website or LinkedIn profile. Find him on Twitter: @tinybigdata.
