(Mostly Cloudy Photo)
Welcome to another morning edition of Mostly Cloudy, as we trudge forward at AWS re:Invent 2019 in Las Vegas! A lot happened yesterday in the desert, and if you didn’t catch it all before settling in for Wednesday morning’s partner keynote, read on.
State of the Cloud: AWS doubles down on tools for AI
AWS CEO Andy Jassy’s re:Invent keynotes are renowned for their length: three-hour manifestos on cloud computing, interspersed with 60-second covers of classic rock songs that attempt to set a theme for the next segment. Every minute of that yearly keynote is planned in exquisite detail, which is why it was quite notable that Jassy devoted an overwhelming portion of his keynote Tuesday morning to a single AWS service.
Amazon SageMaker is one of the cloud leader’s key tools for addressing the burgeoning world of applied artificial intelligence, which a large number of consumer and enterprise tech startups these days are touting as their killer advantage. SageMaker is designed to make it easier for data scientists to build machine-learning models on AWS, and Jassy announced no fewer than seven major improvements to the service Tuesday.
VentureBeat has a good overview of the specific announcements, which include a new integrated development environment for building machine-learning models as well as “Autopilot,” which takes data stored in AWS’s S3 storage service and automatically creates a machine-learning model based on that data. In total, the enhancements are some of the most notable improvements made to the machine-learning tool since it was introduced at re:Invent 2017.
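To make the Autopilot flow concrete, here is a minimal sketch of the request an Autopilot job takes: you point it at training data in S3, name the column you want predicted, and it handles model selection from there. The bucket names, role ARN, job name, and target column below are all placeholders, not anything AWS published.

```python
# Sketch of a SageMaker Autopilot job request, assuming training data
# already sits in S3 as CSV with a "churn" target column. All names and
# ARNs here are hypothetical placeholders.
def build_automl_request(job_name, input_uri, output_uri, target, role_arn):
    """Assemble the request body Autopilot's CreateAutoMLJob API expects."""
    return {
        "AutoMLJobName": job_name,
        "InputDataConfig": [{
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": input_uri,
            }},
            "TargetAttributeName": target,  # the column Autopilot should predict
        }],
        "OutputDataConfig": {"S3OutputPath": output_uri},
        "RoleArn": role_arn,
    }

request = build_automl_request(
    "churn-autopilot-demo",
    "s3://my-bucket/churn/train/",
    "s3://my-bucket/churn/output/",
    "churn",
    "arn:aws:iam::123456789012:role/SageMakerRole",
)
# With AWS credentials configured, this would be submitted via:
#   boto3.client("sagemaker").create_auto_ml_job(**request)
```

The notable part is what is absent: no framework choice, no algorithm, no hyperparameters — which is exactly the pitch Jassy was making.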
The new features underscore a couple of telling points about AWS’s current AI strategy. It’s no secret that AWS has been a little, shall we say, jealous of the attention that has been paid to companies involved in core AI research over the past few years, such as cloud competitors Microsoft and (especially) Google.
But coming off Tuesday’s keynote, it seems like AWS has decided to play a different game. Amazon conducts plenty of its own core AI research aimed at optimizing its massive retail operation, but with these announcements, AWS appears to be aiming for a different part of the AI stack by creating a higher-level set of services that allow researchers working with core AI technology to easily deploy their work to the cloud.
A sorta-blurry overview of AWS’s machine-learning strategy. (AWS Photo)
SageMaker is shaking out as a platform-as-a-service play for artificial intelligence. In this analogy, AWS is somewhat agnostic when it comes to the underlying machine-learning frameworks used in AI research, such as TensorFlow, PyTorch, and MXNet, but is laser-focused on giving researchers using those frameworks an easy way to get their models onto cloud computing hardware.
Jassy took care to point out that 85 percent of workloads built on TensorFlow — an open-source framework developed by Google — are already running on AWS. This strategy makes sense: AWS seems unlikely to bend AI research frameworks in its direction at this point, but all those AI frameworks are useless without the computing power required to train machine-learning models and make inferences based on that training.
It’s generally accepted at this point that artificial intelligence is the future of computing, but it’s difficult to gauge how many companies are actually using those technologies to improve their products and services. Obviously there is some demand: AWS wouldn’t spend nearly 45 minutes of valuable keynote airtime talking up AI tools if the company didn’t think that its customers were looking for help.
But just as very, very few enterprise computing consumers feel the need to design their own chips and servers to meet their needs, AWS seems to have concluded that most companies just want easy ways to train models on the massive amounts of data they have accumulated, without hiring ridiculously expensive data scientists to develop proprietary models for that data.
We’ll have to see what Amazon CTO Werner Vogels has in store for his Thursday morning keynote, which is always the techiest keynote of the week and might involve new work around artificial intelligence. But at this point, it seems like AWS is most concerned with providing tools for the data scientists and developers building the machine-learning models of the future to get their work into production, which is very much in keeping with its historical mission.
Disclosure: AWS paid for my accommodations in Las Vegas.
Yesterday at AWS re:Invent 2019
With Outposts, Local Zones, and Verizon, AWS looks beyond the cloud (Mostly Cloudy)
Only recently has AWS grudgingly accepted the arrival of the hybrid cloud strategy, but it is clearly on board at this point. AWS Outposts, one of the most significant announcements from last year, is now generally available, and two other interesting edge computing services were introduced this week.
Amazon Web Services introduces a second-generation Arm server processor (Mostly Cloudy)
This somehow feels like the most overlooked announcement of the day: AWS just announced a custom Arm processor that promises 40 percent better performance than current-generation Intel chips for 20 percent less money. I’ll have more details on its Graviton strategy for Mostly Cloudy subscribers later today.
Amazon S3 Access Points, Redshift updates as AWS aims to change the data lake game (ZDNet)
I had dinner last night with a couple of executives from data warehouse startup phenom Snowflake Computing, who appeared to be the major target of these enhancements to AWS’s Redshift data warehouse. (They were quite diplomatic.) S3 Access Points are an equally interesting move, and might help reduce the number of mistakes AWS customers make when configuring their S3 storage buckets that leave the contents exposed to the internet.
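The way Access Points can reduce bucket-policy mistakes is by delegation: instead of one sprawling bucket policy accumulating exceptions, the bucket policy simply allows access that arrives through access points in your account, and each application gets its own narrowly scoped access point. A hedged sketch of that delegating bucket policy, with placeholder bucket and account values:

```python
# Sketch of an S3 bucket policy that delegates access control to
# access points owned by one account. Bucket name and account ID are
# placeholders; this builds the JSON document only, it doesn't apply it.
import json

def delegate_to_access_points(bucket, account_id):
    """Allow actions on the bucket only for requests made through an
    access point owned by the given account."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
            "Action": "*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            # Condition key that ties access to the account's access points.
            "Condition": {
                "StringEquals": {"s3:DataAccessPointAccount": account_id}
            },
        }],
    }

policy = delegate_to_access_points("my-data-lake", "123456789012")
print(json.dumps(policy, indent=2))
```

The appeal for misconfiguration-prone shops is that per-application permissions then live on individual access points, which are easier to audit than one giant bucket policy.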
AWS marries Fargate to Kubernetes (Container Journal)
AWS dragged its heels on Kubernetes support, at least compared to its cloud provider counterparts, but it has done some very interesting things with containers over the past few years. Fargate allows customers to provision serverless containers, and its managed Kubernetes service, EKS, now also works with the Fargate technology: Mostly Cloudy subscribers should stay tuned for more details on this new service after I meet with the AWS container team this afternoon.
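The mechanism for pairing the two is a Fargate profile attached to an EKS cluster: pods that match the profile's selectors are scheduled onto Fargate capacity instead of EC2 worker nodes. A minimal sketch of such a profile, where the cluster name, role ARN, subnets, and namespace are all hypothetical:

```python
# Sketch of an EKS Fargate profile: any pod created in the selected
# namespace is scheduled onto Fargate rather than EC2 nodes. Cluster
# name, role ARN, and subnet IDs below are placeholders.
def build_fargate_profile(cluster, role_arn, subnets):
    """Assemble the request body for EKS's CreateFargateProfile API."""
    return {
        "fargateProfileName": "serverless-web",
        "clusterName": cluster,
        "podExecutionRoleArn": role_arn,  # role pods assume to pull images, ship logs
        "subnets": subnets,
        # Selector: pods in the "web" namespace land on Fargate.
        "selectors": [{"namespace": "web"}],
    }

profile = build_fargate_profile(
    "demo-cluster",
    "arn:aws:iam::123456789012:role/EKSFargatePodRole",
    ["subnet-0abc", "subnet-0def"],
)
# With AWS credentials configured, this would be submitted via:
#   boto3.client("eks").create_fargate_profile(**profile)
```

The upshot is that a cluster can mix modes: latency-sensitive services on EC2 node groups, bursty or untrusted workloads in namespaces that match a Fargate profile.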
AWS launches discounted spot capacity for its Fargate container platform (Techcrunch)
Customers running serverless containers can now take advantage of the same discounted spot pricing model offered for spot instances on basic computing services. One of the benefits of containerized applications is that they can be launched and shut down extremely quickly, and spot pricing might make a lot of sense for applications with short, spiky performance requirements.
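A back-of-the-envelope comparison shows why spot pricing fits spiky container workloads. The per-vCPU-hour rate and the discount below are illustrative placeholders, not published Fargate prices:

```python
# Illustrative cost comparison for a bursty containerized job under
# on-demand vs. spot pricing. Both rates are hypothetical, not actual
# AWS Fargate prices.
ON_DEMAND_PER_VCPU_HOUR = 0.04   # hypothetical on-demand rate
SPOT_DISCOUNT = 0.70             # hypothetical 70 percent spot discount

def job_cost(vcpus, hours, spot=False):
    """Cost of one run of a job at the chosen pricing model."""
    rate = ON_DEMAND_PER_VCPU_HOUR * ((1 - SPOT_DISCOUNT) if spot else 1.0)
    return vcpus * hours * rate

# A burst job: 8 vCPUs for half an hour, run 100 times a month.
on_demand = 100 * job_cost(8, 0.5)
spot = 100 * job_cost(8, 0.5, spot=True)
print(f"on-demand: ${on_demand:.2f}/month, spot: ${spot:.2f}/month")
```

The caveat, as with spot instances, is that spot capacity can be reclaimed, so the workload has to tolerate interruption — which short-lived container jobs usually do.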
AWS Launches Cassandra Service (Datanami)
Here’s another log for the “AWS is killing open source” fire: the company announced a managed version of the open-source Apache Cassandra database Tuesday. DataStax, which offers a managed Cassandra database and was reportedly eyeing an IPO in the near future, might have a few things to think about.
And Now, A Word…
I’ll have one more morning update this week from Las Vegas for subscribers as we head into the slower portion of the re:Invent week, and a few other posts as described above. I’ll likely skip the regular Saturday update unless anything crazy happens; based on my conversations this week in Vegas, it seems safe to say that everybody in the cloud computing orbit is looking forward to the holiday break and recharging for the next year.
Around the Cloud
Intel in talks to buy Israeli AI chip co Habana Labs (Globes)
Intel bought an AI chip startup called Nervana three years ago for around $400 million, but it appears that the chip company is still looking for something that will help it compete against custom AI processors developed by cloud providers as well as GPUs from Nvidia. Intel could pay as much as $1 billion for Habana Labs, according to this report, and finding the right chip startup could make a huge difference: AWS cited its $350 million acquisition of Annapurna Labs in 2015 several times this week as a major factor behind its current success.
HPE Unveils ‘Revolutionary’ Edge To Cloud HPE GreenLake Service (CRN)
Um, sure? GreenLake is designed to let HPE customers assess the cost and performance of various workloads across different cloud providers as well as their own data centers, but there are dozens of companies promising to help cloud holdouts figure out their best path to the cloud.
Ampere preps an 80-core Arm processor for the cloud (Network World)
When AWS unveiled Graviton last year, it seemed like some of the wind might have been taken out of the sails of promising Arm server chip startup Ampere, but the company is forging ahead with its road map. Other cloud providers, or even on-premises data center operators, could be interested in Ampere’s chip if it achieves a price/performance ratio similar to that of the second-generation Graviton chip.
Edging Toward Distributed HPC (The Next Platform)
The lackluster Monday Night Live keynote at re:Invent spent a fair amount of time talking about the cloud leader’s investments in high-performance computing, just as traditional HPC companies realize they have to take a page or two from the cloud-native computing model. David Womble of Oak Ridge National Laboratory, one of the biggest supercomputing customers in this market, has a few ideas about how this might play out.