Amazon Web Services Announces 13 New Machine Learning Services and Capabilities

New Amazon SageMaker capabilities make it easier for customers to build, train, and deploy models; SageMaker Ground Truth produces high-quality labeled training data, SageMaker RL delivers the cloud’s first managed service for reinforcement learning algorithms and simulators, and a new machine learning marketplace offers more than 150 new models and algorithms to developers through Amazon SageMaker

AWS DeepRacer, a 1/18th scale autonomous racing car that gets developers rolling with reinforcement learning

Amazon Elastic Inference reduces the cost of machine learning predictions by up to 75 percent; TensorFlow enhancements and new Amazon EC2 P3dn instances drive faster machine learning training; a new custom machine learning inference chip, AWS Inferentia, will drastically reduce costs

New AI services bring intelligence to every app and developer; Amazon Textract extracts text and data from virtually any scanned document; Amazon Personalize and Amazon Forecast bring the same technology used by Amazon.com to developers, with no machine learning experience required; Amazon Comprehend Medical provides natural language processing for medical information

Initial customers for these services include Adobe, Alfresco, BMW, Bristol Myers Squibb, Chick-fil-A, Expedia, Formula 1, Fred Hutch Cancer Institute, Intuit, Johnson & Johnson, Lion Foods, Major League Baseball, National Football League, Pinterest, Ralph Lauren, Roche Pharmaceuticals, Samsung, Shell, Siemens, Snap, Thomson Reuters, T-Mobile, Tyson Foods, Woolworths, Zendesk, Zocdoc, and more

SEATTLE–Nov. 28, 2018– Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced 13 new machine learning capabilities and services, across all layers in the machine learning stack, to help put machine learning in the hands of even more developers. AWS introduced new Amazon SageMaker features that make it easier for developers to build, train, and deploy machine learning models – including low-cost automatic data labeling and reinforcement learning (RL). AWS revealed new services, framework enhancements, and a custom chip to speed up machine learning training and inference while reducing cost. AWS announced new artificial intelligence (AI) services that can extract text from virtually any document, read medical information, and provide customized personalization, recommendations, and forecasts using the same technology used by Amazon.com. And, last but certainly not least, AWS will help developers get rolling with machine learning with AWS DeepRacer, a new 1/18th scale autonomous model race car driven by reinforcement learning.

These announcements continue the drumbeat of machine learning innovation from AWS, which has launched more than 200 significant machine learning capabilities in the past 12 months. Customers using these new services and capabilities include Adobe, BMW, Cathay Pacific, Dow Jones, Expedia, Formula 1, GE Healthcare, HERE, Intuit, Johnson & Johnson, Kia Motors, Lionbridge, Major League Baseball, NASA JPL, Politico.eu, Ryanair, Shell, Tinder, United Nations, Vonage, the World Bank, and Zillow. To learn more about AWS’s new machine learning services, visit: https://aws.amazon.com/machine-learning.

“We want to help all of our customers embrace machine learning, no matter their size, budget, experience, or skill level,” said Swami Sivasubramanian, Vice President, Amazon Machine Learning. “Today’s announcements remove significant barriers to the successful adoption of machine learning, by reducing the cost of machine learning training and inference, introducing new SageMaker capabilities that make it easier for developers to build, train, and deploy machine learning models in the cloud and at the edge, and delivering new AI services based on our years of experience at Amazon.”

New infrastructure, a custom machine learning chip, and framework improvements for faster training and low-cost inference

Most machine learning models are trained by an algorithm that finds patterns in large amounts of data. The model can then make predictions on new data in a process called ‘inference’. Developers use machine learning frameworks to define these algorithms, train models, and generate predictions. Frameworks (such as TensorFlow, Apache MXNet, and PyTorch) allow developers to design and train sophisticated models, often using multiple GPUs to reduce training times, and most developers use more than one of these frameworks in their day-to-day work. Today, AWS announced significant improvements for developers building with all of these popular frameworks, boosting performance and reducing cost for both training and inference.
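To make the distinction between training and inference concrete, here is a minimal, illustrative sketch using TensorFlow's Keras API, one of the frameworks named above. The data is randomly generated and the model is deliberately trivial; it is a sketch of the workflow, not a recommended architecture.

```python
# Minimal sketch of the train-then-infer workflow described above,
# using TensorFlow's Keras API. The data is random and purely illustrative.
import numpy as np
import tensorflow as tf

# Training: an algorithm fits model parameters to labeled examples.
x_train = np.random.rand(1000, 20).astype("float32")
y_train = np.random.randint(0, 2, size=(1000,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32)

# Inference: the trained model makes predictions on new, unseen data.
x_new = np.random.rand(5, 20).astype("float32")
predictions = model.predict(x_new)
print(predictions)
```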

  • New Amazon Elastic Compute Cloud (EC2) GPU instances (available next week): With eight NVIDIA V100 GPUs (each with 32 GB of GPU memory), fast NVMe storage, 96 vCPUs powered by Intel Xeon Scalable processors, and 100 Gbps networking, the new P3dn.24xlarge instances are the most powerful machine learning training instances available in the cloud, allowing developers to train models with more data in less time.
  • AWS-Optimized TensorFlow framework (generally available today): When training with large amounts of data, developers who use TensorFlow have found it challenging to scale TensorFlow across many GPUs, which often results in low GPU utilization and longer training times for large training jobs. By improving the way TensorFlow distributes training tasks across GPUs, the new AWS-Optimized TensorFlow achieves close to linear scalability when training multiple types of neural networks (90 percent efficiency across 256 GPUs, compared to the prior norm of 65 percent). Using the new AWS-Optimized TensorFlow and P3dn instances, developers can now train the popular ResNet-50 model in only 14 minutes, the fastest time recorded and 50 percent faster than the previous best time. These optimizations apply not just to computer vision models but to a broader set of deep learning models.
  • Amazon Elastic Inference (generally available today): While training rightfully receives a lot of attention, inference actually accounts for the majority of the cost and complexity of running machine learning in production (for every dollar spent on training, nine are spent on inference). Amazon Elastic Inference allows developers to dramatically decrease inference costs, with up to 75 percent savings compared to using a dedicated GPU instance. Instead of running on a whole Amazon EC2 P2 or P3 instance with relatively low utilization, developers can run on a smaller, general-purpose Amazon EC2 instance and provision just the right amount of GPU performance from Amazon Elastic Inference. Starting at just 1 TFLOPS, developers can elastically increase or decrease the amount of inference performance, and only pay for what they use. Elastic Inference supports all popular frameworks, and is integrated with Amazon SageMaker and the Amazon EC2 Deep Learning AMIs (Amazon Machine Images). Developers can start using Amazon Elastic Inference without making any changes to their existing models (a minimal deployment sketch appears after this list).
  • AWS Inferentia (available in 2019): For larger workloads that consume entire GPUs or require lower latency, AWS announced a high performance machine learning inference chip, custom designed by AWS. AWS Inferentia provides hundreds of teraflops per chip and thousands of teraflops per Amazon EC2 instance for multiple frameworks (including TensorFlow, Apache MXNet, and PyTorch), and multiple data types (including INT8 and mixed-precision FP16 and bfloat16).
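As a hedged illustration of the Amazon Elastic Inference item above, the sketch below attaches an accelerator to a SageMaker-hosted endpoint using the SageMaker Python SDK. The container image, model artifact, and IAM role are placeholders, and the accelerator_type parameter and the ml.eia1.medium size reflect the SDK and accelerator names around the time of this launch; treat the exact names as assumptions to verify against current documentation.

```python
# Hedged sketch: attaching Elastic Inference acceleration to a SageMaker
# endpoint via the SageMaker Python SDK. All names marked "placeholder"
# are illustrative, not real resources.
import sagemaker
from sagemaker.model import Model

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

model = Model(
    model_data="s3://my-bucket/model/model.tar.gz",   # placeholder model artifact
    image="<inference-container-image-uri>",          # placeholder container image
    role=role,
    sagemaker_session=session,
)

# Deploy on a small general-purpose instance and attach just enough
# GPU-powered acceleration, instead of running a full P2/P3 instance.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",          # small general-purpose host instance
    accelerator_type="ml.eia1.medium",    # Elastic Inference accelerator size
)
```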

Autodesk is a leader in 3D design, engineering, and entertainment software that uses deep learning models for use cases ranging from exploring thousands of potential design alternatives, semantically searching designs, and streamlining engineering and construction processes to optimizing rendering workflows. “Running efficient inference is one of the biggest challenges in machine learning today,” said Peter Jones, Head of AI Engineering for Autodesk Research. “Amazon Elastic Inference is the first capability of its kind we’ve found to help us eliminate excess costs that we incur today from idle GPU capacity. We estimate it will save us 75 percent in costs compared to running GPUs.”

EagleView, a property data analytics company, helps lower property-damage losses from natural disasters by decreasing the time it takes to assess damage so that homeowners can decide next steps much faster. Using aerial, drone, and satellite images, EagleView runs deep learning models on AWS to make quicker, more accurate assessments of property damage within 24 hours of a natural disaster. “Matching the accuracy of human adjusters in property assessments requires us to process massive amounts of data in the form of ultra-high resolution images covering the entire multi-dimensional space (spatial, spectral, and temporal) of a disaster-affected region,” said Shay Strong, Director of Data Science and Machine Learning at EagleView. “Amazon Elastic Inference opens new doors, enabling us to explore running workflows more cost effectively at scale.”

New Amazon SageMaker capabilities make it easier to build, train, and deploy machine learning models; developers get hands-on with AWS DeepRacer, a 1/18th scale autonomous race car driven by reinforcement learning

Amazon SageMaker is a fully managed service that removes the heavy lifting and guesswork from each step of the machine learning process. Amazon SageMaker makes it easier for developers to build, train, tune, and deploy machine learning models (a minimal sketch of this workflow appears just below). Today, AWS announced a number of new capabilities for Amazon SageMaker.
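A hedged sketch of that build, train, and deploy workflow with the SageMaker Python SDK follows. The container image, S3 paths, and IAM role are placeholders, and the parameter names (image_name, train_instance_count, train_instance_type) reflect the SDK as it existed around this announcement; later SDK versions rename some of them.

```python
# Hedged sketch of the build / train / deploy loop that Amazon SageMaker
# manages. All names marked "placeholder" are illustrative, not real resources.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

# Build: point SageMaker at a training container and choose training hardware.
estimator = Estimator(
    image_name="<training-container-image-uri>",   # placeholder container image
    role=role,
    train_instance_count=1,
    train_instance_type="ml.p3.2xlarge",
    output_path="s3://my-bucket/output",           # placeholder output bucket
    sagemaker_session=session,
)

# Train: SageMaker provisions the instances, runs the job, and tears them down.
estimator.fit({"train": "s3://my-bucket/training-data"})  # placeholder dataset

# Deploy: host the trained model behind a managed HTTPS endpoint.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)
```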

  • Amazon SageMaker Ground Truth (generally available today): Building machine learning models requires developers to prepare datasets for training. Before developers can select their algorithms, build their models, and deploy them to make predictions, human annotators manually review thousands of examples and add the labels required for training, a process that is time consuming and expensive. Amazon SageMaker Ground Truth makes it much easier for developers to label their data using human annotators through Mechanical Turk, third-party vendors, or their own employees. Amazon SageMaker Ground Truth learns from these annotations in real time and can automatically apply labels to much of the remaining dataset, reducing the need for human review. As a result, it creates highly accurate training datasets, saves time and complexity, and reduces costs by up to 70 percent compared to human annotation.
  • AWS Marketplace for Machine Learning (generally available today): Machine learning is moving quickly, with new models and algorithms from academia and industry appearing virtually every week. Amazon SageMaker includes some of the most popular models and algorithms built in, but to make sure developers continue to have access to the broadest set of capabilities, the new AWS Marketplace for Machine Learning includes over 150 algorithms and models (with more coming every day) that can be deployed directly to Amazon SageMaker and used immediately. Adding a listing to the marketplace is completely self-service for developers who want to sell through AWS Marketplace.
  • Amazon SageMaker RL (generally available today): In machine learning circles, there is a lot of buzz about reinforcement learning because it’s an exciting technology with a ton of potential. Reinforcement learning trains models without large amounts of training data, and it’s broadly useful when the reward function of a desired outcome is known but the path to achieving it is not and requires a lot of iteration to discover (a generic sketch of this idea appears after this list). Healthcare treatments, optimizing manufacturing supply chains, and solving gaming challenges are a few of the areas that reinforcement learning can help address. However, reinforcement learning has a steep learning curve and many moving parts, which effectively puts it out of reach of all but the most well-funded and technical organizations. Amazon SageMaker RL, the cloud’s first managed reinforcement learning service, allows any developer to build, train, and deploy with reinforcement learning through managed reinforcement learning algorithms, support for multiple frameworks (including Intel Reinforcement Learning Coach and Ray RL), multiple simulation environments (including Simulink and MATLAB), and integration with AWS RoboMaker, AWS’s new robotics service, which provides a simulation platform that works well with SageMaker RL.
  • AWS DeepRacer (available for pre-order today): In just a few lines of code, developers can start learning about reinforcement learning with AWS DeepRacer, a 1/18th scale fully autonomous race car. The car (with all-wheel drive, monster truck tires, an HD video camera, and on-board compute) is driven using reinforcement learning models trained using Amazon SageMaker. Developers can put their skills to the test and race their cars and models against other developers for prizes and glory in the DeepRacer League, the world’s first global autonomous racing league, open to everyone.
  • Amazon SageMaker Neo (generally available today): The new deep learning model compiler lets customers train models once and run them anywhere with up to a 2X improvement in performance. Applications running on connected devices at the edge are particularly sensitive to the performance of machine learning models; they require low-latency decisions and are often deployed across a broad range of different hardware platforms. Amazon SageMaker Neo compiles models for specific hardware platforms, optimizing their performance automatically so they run at up to twice the performance without any loss in accuracy. As a result, developers no longer need to spend time hand-tuning their trained models for each and every hardware platform, saving time and expense. SageMaker Neo supports hardware platforms from NVIDIA, Intel, Xilinx, Cadence, and Arm, and popular frameworks such as TensorFlow, Apache MXNet, and PyTorch. AWS will also make Neo available as an open source project.
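To illustrate the reinforcement learning idea behind SageMaker RL described in the list above, here is a generic, self-contained Q-learning sketch on a toy problem. It is not AWS code and uses no SageMaker APIs; the reward for reaching a goal is known, and the sequence of actions that earns it is discovered by iteration.

```python
# Generic Q-learning illustration (not AWS code).
# Toy task: walk along positions 0..5 and learn to reach position 5.
import random

n_states, n_actions = 6, 2            # actions: 0 = step left, 1 = step right
q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state = 0
    for _ in range(100):              # cap steps so every episode terminates
        # Epsilon-greedy: explore sometimes, otherwise exploit learned values.
        if random.random() < epsilon or q[state][0] == q[state][1]:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: q[state][a])

        next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
        reward = 1.0 if next_state == n_states - 1 else 0.0   # known reward function

        # Q-learning update: estimate the long-term value of each action.
        q[state][action] += alpha * (
            reward + gamma * max(q[next_state]) - q[state][action]
        )
        state = next_state
        if state == n_states - 1:
            break

# The learned greedy policy should step right (action 1) from every position.
print([max(range(n_actions), key=lambda a: q[s][a]) for s in range(n_states - 1)])
```

SageMaker RL wraps this kind of trial-and-error loop in managed algorithms, simulators, and distributed training so developers do not have to build the scaffolding themselves.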

Tyson Foods is one of the world’s largest food companies and a recognized leader in protein. “We are building a computer vision system for our chicken processing facilities and we need highly accurate labeled training datasets to train these systems,” said Chad Wahlquist, Director of Emerging Technology for Tyson Foods. “When we first tried to set up our own labeling solution, it required a large amount of compute and a Frankenstein of open source solutions – even before creating the user interface for data labeling. With Amazon SageMaker Ground Truth, we were able to use the ready-made template for bounding boxes and got a labeling job running in just a few clicks, quickly and easily. Amazon SageMaker Ground Truth also enables us to securely bring our own workers to label the data, which is an essential requirement for our business. We are looking forward to using Amazon SageMaker Ground Truth across our business.”

Dubbed “America’s Un-carrier,” T-Mobile is a leading wireless services, products, and service innovation provider. “The AI at T-Mobile team is integrating AI and machine learning into the systems at our customer care centers, enabling our team of experts to serve customers with greater speed and accuracy through Natural Language Understanding models that show them relevant, contextual customer information in real time,” said Matthew Davis, Vice President of IT Development for T-Mobile. “Labeling data has been foundational to creating high performing models, but is also a monotonous task for our data scientists and software engineers. Amazon SageMaker Ground Truth makes the data labeling process easy, efficient, and accessible, freeing up time for them to focus on what they love – building products that deliver the best experiences for our customers and care representatives.”

Chick-fil-A, Inc. is a family-owned and privately held restaurant company known for its original chicken sandwich, serving freshly prepared food in more than 2,300 restaurants in 47 states and Washington, DC. “Food safety is of critical importance in our business. Our early efforts with computer vision and machine learning show promise in improving operations,” said Jay Duff, Principal Team Lead for Chick-fil-A. “Amazon SageMaker and SageMaker Ground Truth helped us speed up the development of new models and evaluations by making it easier to label and verify new training sets, re-train models, and then iterate on more complex data. Additionally, the workforce management features gave us faster turnaround on manual tasks while reducing administrative toil.”

Arm technology is at the heart of a computing and connectivity revolution that is transforming the way people live and businesses operate. “Arm’s vision of a trillion connected devices by 2035 is driven by the additional consumer value derived from innovations like machine learning,” said Jem Davies, Fellow, General Manager, and Vice President for the Machine Learning Group at Arm. “The combination of Amazon SageMaker Neo and the Arm NN SDK will help developers optimize machine learning models to run efficiently on a wide variety of connected edge devices.”

Cadence enables electronic systems and semiconductor companies to create the innovative end products that are transforming the way people live, work, and play. Cadence software, hardware and semiconductor IP are used by customers to deliver products to market faster. “Cadence® Tensilica® processors are optimized for on-device machine learning applications spanning from autonomous driving cars to speech processing to robotics,” said Babu Mandava, Senior Vice President and General Manager of the IP Group at Cadence Design Systems. “Amazon SageMaker Neo simplifies the deployment of optimized models from cloud to the edge. We are excited to be driving a seamless integration of Amazon SageMaker Neo with our Tensilica processor family and development environment to help developers optimize machine learning models for Tensilica-powered edge devices.”

GE Healthcare is a leading provider of medical imaging, monitoring, biomanufacturing, and cell and gene therapy technologies that enable precision health in diagnostics, therapeutics, and monitoring through intelligent devices, data analytics, applications, and services. “GE Healthcare is transforming healthcare by empowering providers to deliver better outcomes,” said Keith Bigelow, Senior Vice President of Edison Portfolio Strategy, GE Healthcare. “We train computer vision models with Amazon SageMaker that are then deployed in our MRI and X-Ray devices. By applying reinforcement learning techniques, we are able to reduce the size of our trained models while achieving the right balance between network compression and model accuracy. Amazon SageMaker RL enabled us to get from idea to implementation in less than four weeks by removing the complexities of running reinforcement learning workloads.”

“Reinforcement Learning is enabling innovation in machine learning and robotics,” said Brad Porter, Vice President and Distinguished Engineer of Amazon Robotics. “We’re excited Amazon SageMaker is making it easier to try reinforcement learning techniques with real-world applications, and we’re already experimenting with ways to use it for robotic applications. For instance, earlier this year we showed a robot that was able to play beer pong using some of these techniques and we’re excited to continue to explore these opportunities in collaboration with AWS.”

New AI services bring intelligence to all apps, no machine learning experience required

Many developers want to be able to add intelligent features to their applications without requiring any machine learning experience. Building on existing computer vision, speech, language, and chatbot services, AWS announced a significant expansion of AI services.

  • Amazon Textract (available in preview today): Many companies today extract data from documents and forms through manual data entry, which is slow and expensive, or through simple optical character recognition (OCR) software, which is often inaccurate and typically produces output that requires extensive post-processing before the extracted content is in a format a developer’s application can use. Amazon Textract uses machine learning to instantly read virtually any type of document and accurately extract text and data without the need for manual review or custom code, allowing developers to quickly automate document workflows and process millions of document pages in a few hours (a minimal API sketch follows below).
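As a hedged sketch of how a developer might call Amazon Textract from code, the example below uses boto3's synchronous detect_document_text operation. Textract was in preview at the time of this release, so the exact API surface may change; the file name and region are placeholders.

```python
# Hedged sketch: extracting text from a scanned document with Amazon Textract
# via boto3. The file name and region are placeholders.
import boto3

textract = boto3.client("textract", region_name="us-east-1")

with open("scanned-invoice.png", "rb") as f:    # placeholder document
    document_bytes = f.read()

response = textract.detect_document_text(Document={"Bytes": document_bytes})

# The service returns "blocks"; LINE blocks carry the recognized lines of text.
for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block["Text"])
```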

About Amazon Web Services

For over 12 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud platform. AWS offers over 125 fully featured services for compute, storage, databases, networking, analytics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 57 Availability Zones (AZs) within 19 geographic regions around the world, spanning the US, Australia, Brazil, Canada, China, France, Germany, India, Ireland, Japan, Korea, Singapore, and the UK. AWS services are trusted by millions of active customers around the world—including the fastest-growing startups, largest enterprises, and leading government agencies—to power their infrastructure, make them more agile, and lower costs. To learn more about AWS, visit aws.amazon.com.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Fire tablets, Fire TV, Amazon Echo, and Alexa are some of the products and services pioneered by Amazon. For more information, visit amazon.com/about and follow @AmazonNews.