Category: AWS – June 2023

Amazon SageMaker Canvas

Amazon SageMaker Canvas: What’s New

Amazon SageMaker Canvas: Operationalize ML Models in Production

Amazon SageMaker Canvas is a new no-code machine learning platform that allows business analysts to generate accurate ML predictions without writing any code or requiring any ML expertise. It was launched at the AWS re:Invent 2021 conference and is built on the capabilities of Amazon SageMaker, the comprehensive ML service from AWS.

What is Amazon SageMaker Canvas?

Amazon SageMaker Canvas is a visual, point-and-click interface that enables users to access ready-to-use models or create custom models for a variety of use cases, such as:

  • Detect sentiment in free-form text
  • Extract information from documents
  • Identify objects and text in images
  • Predict customer churn
  • Plan inventory efficiently
  • Optimize price and revenue
  • Improve on-time deliveries
  • Classify text or images based on custom categories

Users can import data from disparate sources, select the values they want to predict, automatically prepare and explore data, and create an ML model with a few clicks. They can also run what-if analyses and generate single or bulk predictions with the model. Additionally, they can collaborate with data scientists by sharing, reviewing, and updating ML models across tools. Users can also import ML models from anywhere and generate predictions directly in Amazon SageMaker Canvas.

What is Operationalize ML Models in Production?

Operationalize ML Models in Production is a new feature of Amazon SageMaker Canvas that allows users to easily deploy their ML models to production environments and monitor their performance. Users can choose from different deployment options, such as:

  • Real-time endpoints: Users can create scalable and secure endpoints that can serve real-time predictions from their models. Users can also configure auto-scaling policies, encryption settings, access control policies, and logging options for their endpoints.
  • Batch transformations: Users can run batch predictions on large datasets using their models. Users can specify the input and output locations, the number of parallel requests, and the timeout settings for their batch jobs.
  • Pipelines: Users can create workflows that automate the steps involved in building, deploying, and monitoring their models. Users can use pre-built steps or create custom steps using AWS Lambda functions or containers.

Users can also monitor the performance of their deployed models using Amazon SageMaker Model Monitor, which automatically tracks key metrics such as accuracy, latency, throughput, and error rates. Users can also set up alerts and notifications for any anomalies or deviations from their expected performance.
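
For applications that need to consume predictions programmatically, a model deployed from Canvas to a real-time endpoint can be called through the standard SageMaker runtime API. The following Python (boto3) sketch is illustrative only: the endpoint name and the CSV payload are hypothetical placeholders, and the input format must match whatever your model was trained on.

```python
import boto3

# Minimal sketch: invoke a SageMaker real-time endpoint deployed from Canvas.
# "canvas-churn-model-endpoint" and the CSV record below are placeholders.
runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="canvas-churn-model-endpoint",   # hypothetical endpoint name
    ContentType="text/csv",
    Body="42,Female,Monthly,89.10",               # one record in the model's expected format
)

print(response["Body"].read().decode("utf-8"))    # the model's prediction
```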

Benefits of Amazon SageMaker Canvas

Amazon SageMaker Canvas offers several benefits for business analysts who want to leverage ML for their use cases, such as:

  • No-code: Users do not need to write any code or have any ML experience to use Amazon SageMaker Canvas. They can use a simple and intuitive interface to build and deploy ML models with ease.
  • Accuracy: Users can access ready-to-use models powered by Amazon AI services, such as Amazon Rekognition, Amazon Textract, and Amazon Comprehend, that offer high-quality predictions for common use cases. Users can also build custom models trained on their own data that are optimized for their specific needs.
  • Speed: Users can build and deploy ML models in minutes using Amazon SageMaker Canvas. They can also leverage the scalability and reliability of AWS to run large-scale predictions with low latency and high availability.
  • Collaboration: Users can boost collaboration between business analysts and data scientists by sharing, reviewing, and updating ML models across tools. Users can also import ML models from anywhere and generate predictions on them in Amazon SageMaker Canvas.

How to get started?

To get started, users need to have an AWS account and access to the AWS Management Console. Users can then navigate to the Amazon SageMaker service page and select Amazon SageMaker Canvas from the left navigation pane. Users can then choose from different options to start using Amazon SageMaker Canvas:

  • Use Ready-to-use models: Users can select a ready-to-use model for their use case, such as sentiment analysis, object detection in images, or document analysis. They can then upload their data and generate predictions with a single click.
  • Build a custom model: Users can import their data from one or more data sources, such as Amazon S3 buckets, Amazon Athena tables, or CSV files. They can then select the value they want to predict and create an ML model with a few clicks. They can also explore their data and analyze their model’s performance before generating predictions.
  • Import a model: Users can import an ML model from anywhere, such as Amazon SageMaker Studio or another tool. They can then generate predictions on the imported model without writing any code.

Users can also deploy their models to production environments and monitor their performance using the Operationalize ML Models in Production feature.

Conclusion

Amazon SageMaker Canvas is a no-code machine learning platform that allows business analysts to generate accurate ML predictions without writing any code or requiring any ML expertise. It offers several benefits, such as accuracy, speed, and collaboration, for users who want to leverage ML for their use cases. It also enables users to easily deploy their models to production environments and monitor their performance using the Operationalize ML Models in Production feature. To get started with Amazon SageMaker Canvas, access it from the AWS Management Console and choose whether to use ready-to-use models, build a custom model, or import a model from anywhere.

Take the Next Step: Embrace the Power of Cloud Services

Ready to take your organization to the next level with cloud services? Our team of experts can help you navigate the cloud landscape and find the solutions that best meet your needs. Contact us today to learn more and schedule a consultation.

AWS Database Migration Service

AWS Database Migration Service: Seamless Migration

AWS Database Migration Service (AWS DMS) is a cloud service that makes it possible to migrate relational databases, data warehouses, NoSQL databases, and other types of data stores. You can use AWS DMS to migrate your data into the AWS Cloud or between combinations of cloud and on-premises setups. In this blog post, we will explain what you can do with AWS DMS and how it helps in seamless migration.

Overview of AWS Database Migration Service

AWS DMS is a managed and automated migration service that provides a quick and secure way to migrate databases from on-premises databases, DB instances, or databases running on EC2 instances to the cloud. It helps you modernize, migrate, and manage your environments in the AWS Cloud. AWS DMS supports migration between 20-plus database and analytics engines, such as Oracle to Amazon Aurora MySQL-Compatible Edition, MySQL to Amazon Relational Database Service (Amazon RDS) for MySQL, Microsoft SQL Server to Amazon Aurora PostgreSQL-Compatible Edition, MongoDB to Amazon DocumentDB (with MongoDB compatibility), and Oracle to Amazon Redshift, with Amazon Simple Storage Service (Amazon S3) also supported as a target.

AWS DMS also supports homogeneous and heterogeneous database migrations, meaning you can migrate to the same or a different database engine. For example, you can migrate from Oracle to Oracle, or from Oracle to PostgreSQL. AWS DMS takes care of many of the difficult or tedious tasks involved in a migration project, such as capacity analysis, hardware and software provisioning, installation and administration, testing and debugging, and ongoing data replication and monitoring.

At a basic level, AWS DMS is a server in the AWS Cloud that runs replication software. You create a source and target connection to tell AWS DMS where to extract from and load to. Then you schedule a task that runs on this server to move your data. AWS DMS creates the tables and associated primary keys if they don’t exist on the target. If you prefer, you can create the target tables yourself, or you can use the AWS Schema Conversion Tool (AWS SCT) to create some or all of the target tables, indexes, views, triggers, and so on.
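
To make the flow above concrete, here is a hedged Python (boto3) sketch of creating and starting a replication task once a source endpoint, a target endpoint, and a replication instance already exist. The ARNs and identifiers are placeholders, and the table mapping simply includes every table in every schema; a real migration would typically scope this more narrowly.

```python
import json
import boto3

dms = boto3.client("dms")

# Selection rule: include every table in every schema (placeholder scope).
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

# Create a task that does a full load and then replicates ongoing changes.
# All ARNs below are placeholders for resources created beforehand.
task = dms.create_replication_task(
    ReplicationTaskIdentifier="example-migration-task",                 # hypothetical name
    SourceEndpointArn="arn:aws:dms:region:account:endpoint:SRC",        # placeholder
    TargetEndpointArn="arn:aws:dms:region:account:endpoint:TGT",        # placeholder
    ReplicationInstanceArn="arn:aws:dms:region:account:rep:INSTANCE",   # placeholder
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
task_arn = task["ReplicationTask"]["ReplicationTaskArn"]

# Wait until the task finishes creating, then start it.
dms.get_waiter("replication_task_ready").wait(
    Filters=[{"Name": "replication-task-arn", "Values": [task_arn]}]
)
dms.start_replication_task(
    ReplicationTaskArn=task_arn,
    StartReplicationTaskType="start-replication",
)
```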

What You Can Do with AWS DMS

With AWS DMS, you can perform various migration scenarios, such as:

  • Move to managed databases: Migrate from legacy or on-premises databases to managed cloud services through a simplified migration process, removing undifferentiated database management tasks.
  • Remove licensing costs and accelerate business growth: Modernize to purpose-built databases to innovate and build faster for any use case at scale for one-tenth the cost.
  • Replicate ongoing changes: Create redundancies of business-critical databases and data stores to minimize downtime and protect against any data loss.
  • Improve integration with data lakes: Build data lakes and perform real-time processing on change data from your data stores.

Benefits of using AWS DMS

  • Trusted by customers globally: AWS DMS has been used by thousands of customers across various industries to securely migrate over 1 million databases with minimal downtime.
  • Supports multiple sources and targets: AWS DMS supports migration from 20-plus database and analytics engines, including both commercial and open-source options.
  • Maintains high availability and minimal downtime: AWS DMS supports Multi-AZ deployments and ongoing data replication and monitoring to ensure high availability and minimal downtime during the migration process.
  • Low cost and pay-as-you-go pricing: AWS DMS charges only for the compute resources and additional log storage used during the migration process.
  • Easy to use and scalable: AWS DMS provides a simple web-based console and API to create and manage your migration tasks. You can also scale up or down your replication instances as needed.

How AWS DMS Helps in Seamless Migration

AWS DMS helps you migrate your data seamlessly by providing the following features:

  • Discovery: You can use DMS Fleet Advisor to discover your source data infrastructure, such as servers, databases, and schemas that you can migrate to the AWS Cloud.
  • Schema conversion: You can use DMS Schema Conversion in the AWS DMS console, or download the AWS Schema Conversion Tool (AWS SCT) to your local PC, to automatically assess and convert your source schemas to a new target engine. You can also use AWS SCT to generate reports on compatibility issues and recommendations for optimization.
  • Data migration: You can use AWS DMS to migrate your data from your source to your target with minimal disruption. You can perform one-time migrations or replicate ongoing changes to keep sources and targets in sync.
  • Validation: You can enable AWS DMS data validation to verify that your data was migrated accurately; AWS DMS compares the data in the source and target data stores and reports any mismatches (a short monitoring sketch follows this list).
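
As a lightly hedged illustration of the monitoring side, the following Python (boto3) sketch prints per-table progress for a running task; if data validation is enabled in the task settings, each table also reports a validation state. The task ARN is a placeholder.

```python
import boto3

dms = boto3.client("dms")

# Placeholder ARN for the replication task you want to inspect.
stats = dms.describe_table_statistics(
    ReplicationTaskArn="arn:aws:dms:region:account:task:EXAMPLE"
)

for table in stats["TableStatistics"]:
    print(
        f'{table["SchemaName"]}.{table["TableName"]}: '
        f'state={table["TableState"]}, '
        f'validation={table.get("ValidationState", "not enabled")}'
    )
```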
Conclusion

AWS DMS is a powerful and flexible service that enables you to migrate your databases and data stores to the AWS Cloud or between different cloud and on-premises setups. It supports a wide range of database and analytics engines, both homogeneous and heterogeneous. It also provides features such as discovery, schema conversion, data migration, validation, and replication to help you migrate your data seamlessly and securely.

Take the Next Step: Embrace the Power of Cloud Services

Ready to take your organization to the next level with cloud services? Our team of experts can help you navigate the cloud landscape and find the solutions that best meet your needs. Contact us today to learn more and schedule a consultation.

Amazon Kendra Retrieval API

Amazon Kendra Retrieval API: A New Feature

Amazon Kendra as a Retriever to Build Retrieval Augmented Generation (RAG) Systems

Amazon Kendra Retrieval API: Overview

Retrieval augmented generation (RAG) is a technique that uses generative artificial intelligence (AI) to build question-answering applications. RAG systems have two components: a retriever and a large language model (LLM). Given a query, the retriever identifies the most relevant chunks of text from a corpus of documents and feeds them to the LLM. The LLM then analyzes those passages and generates a comprehensive response to the query.

Amazon Kendra is a fully managed service that provides out-of-the-box semantic search capabilities for state-of-the-art ranking of documents and passages. You can use Amazon Kendra as a retriever for RAG systems: it can source the most relevant content and documents from your enterprise data to maximize the quality of your RAG payload, yielding better LLM responses than conventional or keyword-based search solutions.

This blog post will show you how to use Amazon Kendra as a retriever for RAG systems, along with its applications and benefits.

Amazon Kendra Retrieval API: Steps

To use Amazon Kendra as a retriever for RAG systems, you need to do the following steps:

  1. Create an index in Amazon Kendra and add your data sources. You can use pre-built connectors to popular data sources such as Amazon Simple Storage Service (Amazon S3), SharePoint, Confluence, and websites. Amazon Kendra also supports common document formats such as HTML, Word, PowerPoint, PDF, Excel, and plain text files.
  2. Use the Retrieve API to retrieve up to 100 of the most relevant passages from documents in your index for a given query. The Retrieve API looks at chunks of text, or excerpts referred to as passages, and returns them using semantic search. Semantic search considers the search query’s context plus all the available information from the indexed documents. You can also override boosting at the index level, filter based on document fields or attributes, filter based on the user’s or their group’s access to documents, and include certain fields in the response that might provide useful additional information (see the sketch after these steps).
  3. Send the retrieved passages along with the query as a prompt to the LLM of your choice. The LLM will use the passages as context to generate a natural language answer for the query.
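
The sketch below illustrates steps 2 and 3 in Python with boto3: it calls the Retrieve API against a Kendra index and assembles the returned passages into a prompt. The index ID is a placeholder, and send_to_llm is a hypothetical helper standing in for whichever LLM client you use (for example, a model available via Amazon Bedrock).

```python
import boto3

kendra = boto3.client("kendra")

query = "What is our parental leave policy?"   # example query

# Step 2: retrieve the most relevant passages from the index.
response = kendra.retrieve(
    IndexId="00000000-0000-0000-0000-000000000000",  # placeholder index ID
    QueryText=query,
    PageSize=5,  # number of passages to pull into the prompt
)
passages = [item["Content"] for item in response["ResultItems"]]

# Step 3: send the passages plus the query to your LLM as context.
prompt = (
    "Answer the question using only the context below.\n\n"
    "Context:\n" + "\n\n".join(passages) +
    f"\n\nQuestion: {query}\nAnswer:"
)

# answer = send_to_llm(prompt)   # hypothetical LLM call of your choice
```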

Benefits

Using Amazon Kendra as a retriever for RAG systems has several benefits:

  • You can leverage the high-accuracy search in Amazon Kendra to retrieve the most relevant passages from your enterprise data, improving the accuracy and quality of your LLM responses.
  • You can use Amazon Kendra’s deep learning search models, which are pre-trained on 14 domains and don’t require any machine learning expertise, so there’s no need to deal with word embeddings, document chunking, and other lower-level complexities typically required for RAG implementations.
  • You can easily integrate Amazon Kendra with various LLMs, such as those available soon via Amazon Bedrock and Amazon Titan, transforming how developers and enterprises can solve traditionally complex challenges related to natural language processing and understanding.

Conclusion

In this blog post, we showed you how to use Amazon Kendra as a retriever to build retrieval augmented generation (RAG) systems. We explained what RAG is, how it works, how to use Amazon Kendra as a retriever, and its benefits. We hope you find this blog post useful and informative.

Take the Next Step: Embrace the Power of Cloud Services

Ready to take your organization to the next level with cloud services? Our team of experts can help you navigate the cloud landscape and find the solutions that best meet your needs. Contact us today to learn more and schedule a consultation.
