News for machine learning and big data from AWS

During the AWS re:Invent conference, Amazon Web Services announced eight new capabilities for SageMaker, its end-to-end machine learning service. The tool is used by developers, data scientists, and business analysts to quickly build, train, and deploy machine learning models on fully managed infrastructure with integrated tools and workflows.

The announcement includes new model governance tools that provide visibility into model performance throughout the lifecycle. A new Studio Notebook capability enables customers to inspect and resolve data quality issues in just a few clicks, facilitates real-time collaboration between data science teams, and accelerates the transition from experimentation to production by converting notebook code into automated jobs.

Finally, new capabilities have also been introduced to automate model validation and to simplify working with geospatial data.



Bratin Saha, Vice President of Artificial Intelligence and Machine Learning at AWS, said, “The new Amazon SageMaker features announced today make it even easier for teams to accelerate the end-to-end development and deployment of ML models.”

Databases and Analytics

SageMaker isn't the only AWS product to receive major updates: AWS also announced five major additions to its AWS for Data portfolio that make it faster and simpler for customers to manage and analyze data.

New capabilities for Amazon DocumentDB (with MongoDB compatibility), Amazon OpenSearch Service, and Amazon Athena make it easier to run high-performance database and analytics workloads at scale.

Additionally, AWS announced new AWS Glue features that automatically manage data quality across data lakes and pipelines. Finally, Amazon Redshift now supports a high-availability configuration that spans multiple AWS Availability Zones.
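For context, Glue data quality rules are written in AWS's Data Quality Definition Language (DQDL). A minimal sketch of a ruleset might look like the following; the column names and thresholds are hypothetical, chosen only to illustrate the rule syntax:

```
Rules = [
    // Every record must have an order identifier
    IsComplete "order_id",

    // At least 95% of order identifiers must be distinct
    Uniqueness "order_id" > 0.95,

    // Status values must come from a known set
    ColumnValues "status" in ["PENDING", "SHIPPED", "DELIVERED"]
]
```

A ruleset like this can be evaluated against a Glue Data Catalog table, and failed rules surface as data quality findings for the pipeline.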

Today's announcements help customers get the most out of their data on AWS by giving them the right tools for their data-driven workloads, letting them operate at scale, and increasing availability.