Amazon DynamoDB Local

Amazon DynamoDB Local v2.0: What’s New

Learn About Amazon DynamoDB local version 2.0

Amazon DynamoDB is a fully managed NoSQL database service that delivers fast, consistent performance and seamless scalability. It lets you store and query data without worrying about servers, provisioning, or maintenance. But what if you want to develop and test your applications locally without accessing the DynamoDB web service? That’s where Amazon DynamoDB local comes in handy.

What is Amazon DynamoDB local?

Amazon DynamoDB local is a downloadable version of Amazon DynamoDB that you can run on your computer. It simulates the DynamoDB web service so that you can use it with your existing DynamoDB API calls.

It is ideal for development and testing, as it helps you save on throughput, data storage, and data transfer fees. In addition, you don’t need an internet connection while you work on your application. You can use it with any of the supported AWS SDKs, such as Java, Python, Node.js, Ruby, .NET, PHP, and Go. You can also use it with the AWS CLI or the AWS Toolkit for Visual Studio.

What’s New in Amazon DynamoDB Local version 2.0?

Amazon DynamoDB local version 2.0 was released on July 5, 2023. It has some important changes and improvements that you should know about.

Migration to jakarta.* namespace

The most significant change is the migration to use the jakarta.* namespace instead of the javax.* namespace. This means that Java developers can now use Amazon DynamoDB local with Spring Boot 3 and frameworks such as Spring Framework 6 and Micronaut Framework 4 to build modernized, simplified, and lightweight cloud-native applications.

The jakarta.* namespace is part of the Jakarta EE project, which is the successor of Java EE. Jakarta EE aims to provide a platform for developing enterprise applications using Java technologies.

If you are using Java SDKs or tools that rely on the javax.* namespace, you will need to update them to use the jakarta.* namespace before using Amazon DynamoDB local version 2.0. For more information, see Migrating from javax.* to jakarta.*.

Updated Access Key ID convention

Another change is the updated convention for the Access Key ID when using Amazon DynamoDB local. The new convention specifies that the AWS_ACCESS_KEY_ID can only contain letters (A–Z, a–z) and numbers (0–9).

This change was made to align with the Access Key ID convention for the DynamoDB web service, which also only allows letters and numbers. This helps avoid confusion and errors when switching between Amazon DynamoDB local and the DynamoDB web service.

If you use an Access Key ID containing other characters, such as dashes (-) or underscores (_), you must change it before using version 2.0. For more information, see Troubleshooting “The Access Key ID or Security Token is Invalid” Error After Upgrading DynamoDB Local to Version 2.0 or Greater.

Bug fixes and performance improvements

Version 2.0 also includes several bug fixes and performance improvements that enhance its stability and usability.

For example, one of the bug fixes addresses an issue where version 1.19.0 had an empty jar file in its repository, causing errors when downloading or running it. This issue has been resolved in version 2.0.

Getting Started with Amazon DynamoDB local version 2.0

  • Getting started is easy and free. You can download Amazon DynamoDB local from the Deploying DynamoDB locally page and follow the instructions to install and run it on your preferred operating system (macOS, Linux, or Windows).
  • You can also use it as an Apache Maven dependency or as a Docker image if you prefer those options.
  • Once you have Amazon DynamoDB local running on your computer, you can use any of the supported SDKs, tools, or frameworks to develop and test your applications locally.
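Once the local instance is running, pointing an SDK at it is mostly a matter of overriding the endpoint. The Python (boto3) sketch below illustrates the idea; the port and dummy credentials are placeholders, and note that under version 2.0 the access key ID must contain only letters and numbers:

```python
# Sketch: configuring a boto3 client for DynamoDB local.
# DynamoDB local listens on port 8000 by default; credentials are dummies,
# but under v2.0 the access key ID may contain only letters and digits.

def local_client_config(port=8000):
    """Build the keyword arguments for a boto3 client aimed at DynamoDB local."""
    return {
        "service_name": "dynamodb",
        "endpoint_url": f"http://localhost:{port}",
        "region_name": "us-west-2",          # any region string works locally
        "aws_access_key_id": "dummykey123",  # letters and digits only (v2.0 rule)
        "aws_secret_access_key": "dummysecret",
    }

cfg = local_client_config()
# To actually connect (requires boto3 and DynamoDB local running):
#   import boto3
#   ddb = boto3.client(**cfg)
#   print(ddb.list_tables()["TableNames"])
```

Because only the endpoint differs, the same application code can later run unchanged against the DynamoDB web service by dropping the `endpoint_url` override.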

Conclusion

Amazon DynamoDB local version 2.0 is a great way to develop and test your applications locally without accessing the DynamoDB web service. It has some important changes and improvements that make it compatible with the latest Java technologies and conventions. If you are a Java developer who wants to use it with Spring Boot 3 or other frameworks that use the jakarta.* namespace, you should upgrade to version 2.0 as soon as possible.

If you are using other SDKs or tools that rely on the javax.* namespace, or an Access Key ID containing other characters, you will need to update them before upgrading. DynamoDB local is free to download and use, and it works with your existing DynamoDB API calls. You can get started today by downloading it from the Deploying DynamoDB locally page.

Take the Next Step: Embrace the Power of Cloud Services

Ready to take your organization to the next level with cloud services? Our team of experts can help you navigate the cloud landscape and find the solutions that best meet your needs. Contact us today to learn more and schedule a consultation.

Creating Azure Active Directory: Simplifying Identity Management in the Cloud

Microsoft Entra ID: The New Name for Azure AD

Microsoft Entra ID: What’s in a name?

Microsoft has recently announced that it will rebrand its popular cloud-based identity and access management service, Azure Active Directory, as Microsoft Entra ID. This change will take effect in early 2024, and will affect all existing and new customers of the service. But why did Microsoft decide to change the name of such a well-known and widely used product? And what does Entra ID mean?

In this article, we will explore the reasons behind this rebranding and how it reflects Microsoft’s vision and strategy for the future of identity and security in the cloud.

Why change the name?

Azure Active Directory, or AAD for short, was launched in 2010 as a cloud-based version of Microsoft’s on-premises Active Directory service, which provides identity and access management for Windows-based networks. AAD enables users to sign in and access applications and resources across Microsoft’s cloud platform, Azure, as well as third-party services that integrate with AAD. AAD also offers features such as multi-factor authentication, single sign-on, conditional access, identity protection, and more.

Over the years, AAD has become one of the world’s most popular and trusted cloud identity services, with over 400 million active users and over 30 billion authentication requests per day. AAD supports over 3,000 pre-integrated applications and is used by over 90% of Fortune 500 companies.

However, Microsoft realized that the name Azure Active Directory no longer accurately reflects the scope and capabilities of the service. As Microsoft’s cloud platform evolved, so did AAD. It is not just a directory service for Azure anymore. AAD is a comprehensive identity platform that works across multiple clouds, hybrid environments, and devices. It is also not just an extension of Active Directory anymore. It is a modern and innovative service which leverages machine learning, artificial intelligence, and blockchain to provide secure and seamless identity experiences for users and organizations.

Therefore, Microsoft decided to rename AAD as Microsoft Entra ID, to better communicate its value proposition and differentiation in the market.

What does Entra ID mean?

Microsoft Entra ID is a combination of two words: Entra and ID. Entra is derived from the Latin “intrare”, meaning “to enter”. ID is an abbreviation for “identity”. Entra ID signifies Microsoft’s mission to enable users to enter any application or resource with their identity, regardless of where they are or what device they use.

Microsoft Entra ID also conveys Microsoft’s vision to empower users and organizations with intelligent and adaptive identity solutions that enhance security, productivity, and collaboration in the cloud era.

What are the benefits of Entra ID?

Microsoft Entra ID will offer the same features and functionality as AAD, but with a new name and logo that align with Microsoft’s brand identity and design language. Customers using AAD today will not need to change their configurations or integrations. They will simply see the new name and logo in their portals, documentation, and communications from Microsoft.

However, Microsoft Entra ID will also bring some new benefits to customers, such as:

  • A simplified and consistent naming scheme across Microsoft’s cloud services. For example, instead of Azure AD B2C (Business to Consumer), customers will see Microsoft Entra ID B2C. Instead of Azure AD B2B (Business to Business), customers will see Microsoft Entra ID B2B.
  • A unified and integrated identity experience across Microsoft’s cloud offerings. For example, customers using Microsoft 365, Dynamics 365, Power Platform, or other Microsoft cloud services can manage their identities using Entra ID as a single-entry point.
  • A more flexible and extensible identity platform that can support new scenarios and use cases in the future. For example, customers can leverage Entra ID’s capabilities for decentralized identity using blockchain technology or for verifiable credentials using digital certificates.

Conclusion

Microsoft Entra ID is more than just a name change. It reflects Microsoft’s commitment to delivering innovative and secure identity solutions for the cloud era. By rebranding AAD as Entra ID, Microsoft aims to simplify its messaging, unify its identity offerings, and extend its platform for new opportunities and challenges.

Take the Next Step: Embrace the Power of Cloud Services

Ready to take your organization to the next level with cloud services? Our team of experts can help you navigate the cloud landscape and find the solutions that best meet your needs. Contact us today to learn more and schedule a consultation.

Amazon SageMaker Canvas

Amazon SageMaker Canvas: What’s New

Amazon SageMaker Canvas: Operationalize ML Models in Production

Amazon SageMaker Canvas is a new no-code machine learning platform that allows business analysts to generate accurate ML predictions without writing any code or requiring any ML expertise. It was launched at the AWS re:Invent 2021 conference and is built on the capabilities of Amazon SageMaker, the comprehensive ML service from AWS.

What is Amazon SageMaker Canvas?

Amazon SageMaker Canvas is a visual, point-and-click interface that enables users to access ready-to-use models or create custom models for a variety of use cases, such as:

  • Detecting sentiment in free-form text
  • Extracting information from documents
  • Identifying objects and text in images
  • Predicting customer churn
  • Planning inventory efficiently
  • Optimizing price and revenue
  • Improving on-time deliveries
  • Classifying text or images based on custom categories

Users can import data from disparate sources, select values they want to predict, automatically prepare and explore data, and create an ML model with a few clicks. They can also run what-if analysis and generate single or bulk predictions with the model. Additionally, they can collaborate with data scientists by sharing, reviewing, and updating ML models across tools. Users can also import ML models from anywhere and generate predictions directly in Amazon SageMaker Canvas.

What is Operationalize ML Models in Production?

Operationalize ML Models in Production is a new feature of Amazon SageMaker Canvas that allows users to easily deploy their ML models to production environments and monitor their performance. Users can choose from different deployment options, such as:

  • Real-time endpoints: Users can create scalable and secure endpoints that can serve real-time predictions from their models. Users can also configure auto-scaling policies, encryption settings, access control policies, and logging options for their endpoints.
  • Batch transformations: Users can run batch predictions on large datasets using their models. Users can specify the input and output locations, the number of parallel requests, and the timeout settings for their batch jobs.
  • Pipelines: Users can create workflows that automate the steps involved in building, deploying, and monitoring their models. Users can use pre-built steps or create custom steps using AWS Lambda functions or containers.

Users can also monitor the performance of their deployed models using Amazon SageMaker Model Monitor, which automatically tracks key metrics such as accuracy, latency, throughput, and error rates. Users can also set up alerts and notifications for any anomalies or deviations from their expected performance.
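As a rough illustration of the real-time option, the Python (boto3) sketch below invokes a deployed endpoint through the SageMaker runtime API; the endpoint name, feature values, and CSV input format are hypothetical and depend on how the model was built:

```python
# Sketch: invoking a real-time endpoint for a model deployed from
# SageMaker Canvas. The endpoint name and feature order are placeholders;
# the boto3 "sagemaker-runtime" InvokeEndpoint API is the real call used.

def build_csv_payload(features):
    """Serialize an ordered list of feature values as a single CSV row."""
    return ",".join(str(v) for v in features)

def invoke(endpoint_name, features):
    """Send one CSV row to the endpoint and return the raw prediction text."""
    import boto3  # requires boto3 and AWS credentials
    runtime = boto3.client("sagemaker-runtime")
    resp = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="text/csv",
        Body=build_csv_payload(features),
    )
    return resp["Body"].read().decode()

payload = build_csv_payload([34, "US", 12.5])
# invoke("canvas-churn-endpoint", [34, "US", 12.5])  # needs a live endpoint
```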

Benefits of Amazon SageMaker Canvas

It offers several benefits for business analysts who want to leverage ML for their use cases, such as:

  • No-code: Users do not need to write any code or have any ML experience to use Amazon SageMaker Canvas. They can use a simple and intuitive interface to build and deploy ML models with ease.
  • Accuracy: Users can access ready-to-use models powered by Amazon AI services, such as Amazon Rekognition, Amazon Textract, and Amazon Comprehend, that offer high-quality predictions for common use cases. Users can also build custom models trained on their own data that are optimized for their specific needs.
  • Speed: Users can build and deploy ML models in minutes using Amazon SageMaker Canvas. They can also leverage the scalability and reliability of AWS to run large-scale predictions with low latency and high availability.
  • Collaboration: Users can boost collaboration between business analysts and data scientists by sharing, reviewing, and updating ML models across tools. Users can also import ML models from anywhere and generate predictions on them in Amazon SageMaker Canvas.

How to get started?

To get started, users need to have an AWS account and access to the AWS Management Console. Users can then navigate to the Amazon SageMaker service page and select Amazon SageMaker Canvas from the left navigation pane. Users can then choose from different options to start using Amazon SageMaker Canvas:

  • Use Ready-to-use models: Users can select a ready-to-use model for their use case, such as sentiment analysis, object detection in images, or document analysis. They can then upload their data and generate predictions with a single click.
  • Build a custom model: Users can import their data from one or more data sources, such as Amazon S3 buckets, Amazon Athena tables, or CSV files. They can then select the value they want to predict and create an ML model with a few clicks. They can also explore their data and analyze their model’s performance before generating predictions.
  • Import a model: Users can import an ML model from anywhere, such as Amazon SageMaker Studio or another tool. They can then generate predictions on the imported model without writing any code.

Users can also deploy their models to production environments and monitor their performance using the Operationalize ML Models in Production feature.

Conclusion

Amazon SageMaker Canvas is a no-code machine learning platform that allows business analysts to generate accurate ML predictions without writing any code or requiring any ML expertise. It offers accuracy, speed, and collaboration benefits for users who want to leverage ML for their use cases, and it enables them to deploy their models to production environments and monitor their performance using the Operationalize ML Models in Production feature. You can get started with Amazon SageMaker Canvas from the AWS Management Console and choose whether to use ready-to-use models, build custom models, or import models from anywhere.

Take the Next Step: Embrace the Power of Cloud Services

Ready to take your organization to the next level with cloud services? Our team of experts can help you navigate the cloud landscape and find the solutions that best meet your needs. Contact us today to learn more and schedule a consultation.

AWS Database Migration Service

AWS Database Migration Service: Seamless Migration

AWS Database Migration Service (AWS DMS) is a cloud service that makes it possible to migrate relational databases, data warehouses, NoSQL databases, and other types of data stores. You can use AWS DMS to migrate your data into the AWS Cloud or between combinations of cloud and on-premises setups. In this blog post, we will explain what you can do and how AWS Database Migration Service helps in seamless migration.

Overview of AWS Database Migration Service

AWS DMS is a managed and automated migration service that provides a quick and secure way to migrate databases from on-premises databases, DB instances, or databases running on EC2 instances to the cloud. It helps you modernize, migrate, and manage your environments in the AWS Cloud. AWS DMS supports migration between 20-plus database and analytics engines, such as Oracle to Amazon Aurora MySQL-Compatible Edition, MySQL to Amazon Relational Database Service (RDS) for MySQL, Microsoft SQL Server to Amazon Aurora PostgreSQL-Compatible Edition, MongoDB to Amazon DocumentDB (with MongoDB compatibility), Oracle to Amazon Redshift, and Amazon Simple Storage Service (Amazon S3).

AWS DMS also supports homogeneous and heterogeneous database migrations, meaning you can migrate to the same or a different database engine. For example, you can migrate from Oracle to Oracle, or from Oracle to PostgreSQL. AWS DMS takes care of many of the difficult or tedious tasks involved in a migration project, such as capacity analysis, hardware and software provisioning, installation and administration, testing and debugging, and ongoing data replication and monitoring.

At a basic level, AWS DMS is a server in the AWS Cloud that runs replication software. You create a source and target connection to tell AWS DMS where to extract from and load to. Then you schedule a task that runs on this server to move your data. AWS DMS creates the tables and associated primary keys if they don’t exist on the target. You can create the target tables yourself if you prefer. Or you can use AWS Schema Conversion Tool (AWS SCT) to create some or all of the target tables, indexes, views, triggers, and so on.
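The source-to-target flow above can be sketched with the AWS SDK for Python (boto3); the ARNs below are placeholders for endpoints and a replication instance you would create first, and the table mapping is a minimal include-everything selection rule:

```python
# Sketch: starting a DMS replication task with boto3. The ARNs are
# placeholders; create_replication_task is the real AWS DMS API call.
import json

def table_mappings(schema="%", table="%"):
    """Selection rule that includes every table matching the given patterns."""
    return {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": schema, "table-name": table},
            "rule-action": "include",
        }]
    }

def start_migration(task_id, source_arn, target_arn, instance_arn):
    """Create a full-load task that also replicates ongoing changes (CDC)."""
    import boto3  # requires boto3 and AWS credentials
    dms = boto3.client("dms")
    return dms.create_replication_task(
        ReplicationTaskIdentifier=task_id,
        SourceEndpointArn=source_arn,
        TargetEndpointArn=target_arn,
        ReplicationInstanceArn=instance_arn,
        MigrationType="full-load-and-cdc",
        TableMappings=json.dumps(table_mappings()),
    )

mappings = table_mappings("hr", "%")  # migrate every table in the "hr" schema
```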

What You Can Do with AWS DMS

With AWS DMS, you can perform various migration scenarios, such as:

  • Move to managed databases: Migrate from legacy or on-premises databases to managed cloud services through a simplified migration process, removing undifferentiated database management tasks.
  • Remove licensing costs and accelerate business growth: Modernize to purpose-built databases to innovate and build faster for any use case at scale for one-tenth the cost.
  • Replicate ongoing changes: Create redundancies of business-critical databases and data stores to minimize downtime and protect against any data loss.
  • Improve integration with data lakes: Build data lakes and perform real-time processing on change data from your data stores.

Benefits of using AWS DMS

  • Trusted by customers globally: AWS DMS has been used by thousands of customers across various industries to securely migrate over 1 million databases with minimal downtime.
  • Supports multiple sources and targets: AWS DMS supports migration from 20-plus database and analytics engines, including both commercial and open-source options.
  • Maintains high availability and minimal downtime: AWS DMS supports Multi-AZ deployments and ongoing data replication and monitoring to ensure high availability and minimal downtime during the migration process.
  • Low cost and pay-as-you-go pricing: AWS DMS charges only for the compute resources and additional log storage used during the migration process.
  • Easy to use and scalable: AWS DMS provides a simple web-based console and API to create and manage your migration tasks. You can also scale up or down your replication instances as needed.

How AWS DMS Helps in Seamless Migration

AWS DMS helps you migrate your data seamlessly by providing the following features:

  • Discovery: You can use DMS Fleet Advisor to discover your source data infrastructure, such as servers, databases, and schemas that you can migrate to the AWS Cloud.
  • Schema conversion: You can use AWS SCT, which you download to your local machine, to automatically assess and convert your source schemas to a new target engine. You can also use AWS SCT to generate reports on compatibility issues and recommendations for optimization.
  • Data migration: You can use AWS DMS to migrate your data from your source to your target with minimal disruption. You can perform one-time migrations or replicate ongoing changes to keep sources and targets in sync.
  • Validation: You can use AWS SCT, downloaded to your local machine, to validate the data integrity and performance of your migrated data, and to compare the source and target schemas and data.

Conclusion

AWS DMS is a powerful and flexible service that enables you to migrate your databases and data stores to the AWS Cloud or between different cloud and on-premises setups. It supports a wide range of database and analytics engines, both homogeneous and heterogeneous. It also provides features such as discovery, schema conversion, data migration, validation, and replication to help you migrate your data seamlessly and securely.

Take the Next Step: Embrace the Power of Cloud Services

Ready to take your organization to the next level with cloud services? Our team of experts can help you navigate the cloud landscape and find the solutions that best meet your needs. Contact us today to learn more and schedule a consultation.

Amazon Kendra Retrieval API

Amazon Kendra Retrieval API: A New Feature

Amazon Kendra as a Retriever to Build Retrieval Augmented Generation (RAG) Systems

Amazon Kendra Retrieval API: Overview

Retrieval augmented generation (RAG) is a technique that uses generative artificial intelligence (AI) to build question-answering applications. A RAG system has two components: a retriever and a large language model (LLM). Given a query, the retriever identifies the most relevant chunks of text from a corpus of documents and feeds them to the LLM, which analyzes those passages and generates a comprehensive response to the query.

Amazon Kendra is a fully managed service that provides out-of-the-box semantic search capabilities for state-of-the-art ranking of documents and passages. You can use Amazon Kendra as a retriever for RAG systems. It can source the most relevant content and documents from your enterprise data to maximize the quality of your RAG payload, yielding better LLM responses than conventional or keyword-based search solutions.

This blog post will show you how to use Amazon Kendra as a retriever for RAG systems, with its application and benefits.

Amazon Kendra Retrieval API: Steps

To use Amazon Kendra as a retriever for RAG systems, you need to do the following steps:

  1. Create an index in Amazon Kendra and add your data sources. You can use pre-built connectors to popular data sources such as Amazon Simple Storage Service (Amazon S3), SharePoint, Confluence, and websites. Amazon Kendra also supports common document formats such as HTML, Word, PowerPoint, PDF, Excel, and plain text files.
  2. Use the Retrieve API to retrieve the top 100 most relevant passages from documents in your index for a given query. The Retrieve API looks at chunks of text, or excerpts referred to as passages, and returns them using semantic search, which considers the search query’s context plus all the available information from the indexed documents. You can also override boosting at the index level, filter based on document fields or attributes, filter based on the user’s or their group’s access to documents, and include certain fields in the response that might provide useful additional information.
  3. Send the retrieved passages along with the query as a prompt to the LLM of your choice. The LLM will use the passages as context to generate a natural language answer for the query.
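The steps above can be sketched in Python with boto3; the index ID, query, and prompt template are illustrative, and the LLM call itself is left abstract since the post does not prescribe a specific model:

```python
# Sketch of steps 2 and 3: retrieve passages with Amazon Kendra, then
# assemble them into a prompt for an LLM. The Retrieve API is real; the
# index ID, query, and prompt wording are placeholders.

def build_prompt(query, passages):
    """Combine retrieved passages with the user query into an LLM prompt."""
    context = "\n\n".join(passages)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

def retrieve_passages(index_id, query, top_k=100):
    """Fetch the most relevant passages for a query via the Retrieve API."""
    import boto3  # requires boto3 and AWS credentials
    kendra = boto3.client("kendra")
    resp = kendra.retrieve(IndexId=index_id, QueryText=query, PageSize=top_k)
    return [item["Content"] for item in resp["ResultItems"]]

prompt = build_prompt("What is our leave policy?", ["Employees accrue 20 days."])
# The prompt would then be sent to the LLM of your choice.
```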

Benefits

Using Amazon Kendra as a retriever for RAG systems has several benefits:

  • You can leverage the high-accuracy search in Amazon Kendra to retrieve the most relevant passages from your enterprise data, improving the accuracy and quality of your LLM responses.
  • You can use Amazon Kendra’s deep learning search models that are pre-trained on 14 domains and don’t require any machine learning expertise. So, there’s no need to deal with word embeddings, document chunking, and other lower-level complexities typically required for RAG implementations.
  • You can easily integrate Amazon Kendra with various LLMs, such as those available soon via Amazon Bedrock and Amazon Titan, transforming how developers and enterprises can solve traditionally complex challenges related to natural language processing and understanding.

Conclusion

In this blog post, we showed you how to use Amazon Kendra as a retriever to build retrieval augmented generation (RAG) systems. We explained what RAG is, how it works, how to use Amazon Kendra as a retriever, and its benefits. We hope you find this blog post useful and informative.

Take the Next Step: Embrace the Power of Cloud Services

Ready to take your organization to the next level with cloud services? Our team of experts can help you navigate the cloud landscape and find the solutions that best meet your needs. Contact us today to learn more and schedule a consultation.

Amazon Elastic Container Service

Amazon Elastic Container Service: What’s New

Explore the Latest Advancements in Amazon Elastic Container Service

Amazon Elastic Container Service: Introduction

In the dynamic world of cloud computing, staying at the forefront of technology is essential. Amazon Elastic Container Service (Amazon ECS) has been revolutionizing container orchestration, offering developers an efficient and scalable platform. In June 2023, Amazon ECS introduced several exciting features and improvements, enhancing its capabilities and empowering developers to streamline their containerized applications. Let’s delve into the latest advancements and understand how the new release helps in building and managing containerized environments.

What’s New in Amazon Elastic Container Service?

Enhanced Scalability and Performance

Amazon ECS has always been known for its scalability, and the June 2023 update takes it a step further. The latest release introduces an enhanced scaling engine that optimizes the management of container instances. It leverages advanced algorithms to scale up or down based on workload demands, ensuring optimal resource utilization and cost efficiency. This feature enables developers to handle sudden traffic spikes and effectively manage workloads in a highly dynamic environment.

Improved Application Monitoring and Insights

Monitoring and gaining insights into containerized applications are vital for efficient management. The June update of Amazon ECS introduces new monitoring capabilities, allowing developers to collect and analyze essential metrics through Amazon CloudWatch. With this enhanced monitoring, developers can track resource utilization and application performance, and set alarms to detect anomalies. These insights enable proactive troubleshooting and better decision-making, ultimately leading to improved application performance and user experience.
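As an illustration, the Python (boto3) sketch below builds a CloudWatch query for an ECS service’s average CPU utilization; the cluster and service names are placeholders, while CPUUtilization in the AWS/ECS namespace is a standard metric:

```python
# Sketch: querying an ECS service metric from CloudWatch with boto3.
# Cluster and service names are placeholders for your own resources.
from datetime import datetime, timedelta, timezone

def cpu_metric_query(cluster, service, hours=1):
    """Build GetMetricStatistics parameters for average ECS CPU utilization."""
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/ECS",
        "MetricName": "CPUUtilization",
        "Dimensions": [
            {"Name": "ClusterName", "Value": cluster},
            {"Name": "ServiceName", "Value": service},
        ],
        "StartTime": now - timedelta(hours=hours),
        "EndTime": now,
        "Period": 300,  # five-minute buckets
        "Statistics": ["Average"],
    }

params = cpu_metric_query("web-cluster", "api-service")
# import boto3  # requires boto3 and AWS credentials
# cw = boto3.client("cloudwatch")
# print(cw.get_metric_statistics(**params)["Datapoints"])
```

The same parameter dictionary could feed a CloudWatch alarm definition to alert on anomalies, as described above.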

Enhanced Security and Compliance

Security is of paramount importance in any cloud infrastructure. Amazon ECS understands this and has introduced new security features in the June update. Enhanced integration with AWS Identity and Access Management (IAM) now allows developers to define granular permissions for container instances, tasks, and services. This ensures that only authorized personnel can access and modify critical resources, enhancing the overall security posture of containerized applications. Additionally, the update introduces automatic encryption at rest for container images, adding an extra layer of protection to sensitive data.

Simplified Application Deployment

Deploying containerized applications can sometimes be a complex process. However, Amazon ECS aims to simplify this aspect for developers. The latest release introduces a new deployment wizard that guides users through the process, making it more intuitive and hassle-free. With this wizard, developers can define deployment strategies, manage rollbacks, and automate application updates. This simplification of the deployment process enables faster time to market and enhances developer productivity.

Enhanced Integration and Extensibility

Integration with other AWS services is crucial for building comprehensive and scalable applications. Amazon ECS has introduced enhanced integrations with AWS Fargate, AWS App Mesh, and AWS PrivateLink in the June update. These integrations give developers more flexibility in configuring networking, managing microservices, and securely accessing containerized applications. Furthermore, the update also includes expanded support for popular container orchestrators like Kubernetes, empowering developers with additional options and flexibility.

Amazon Elastic Container Service: Conclusion

Amazon ECS continues to evolve and provide developers with powerful tools to build, deploy, and manage containerized applications at scale. The June 2023 update brings several new features and improvements that enhance scalability, security, monitoring, deployment, and integration capabilities. With these advancements, developers can optimize resource utilization, gain better insights into application performance, enhance security, simplify deployment, and seamlessly integrate with other AWS services. By leveraging these new features, developers can unlock the true potential of containerization and deliver robust and scalable applications in a rapidly changing cloud landscape.

Take the Next Step: Embrace the Power of Cloud Services

Ready to take your organization to the next level with cloud services? Our team of experts can help you navigate the cloud landscape and find the solutions that best meet your needs. Contact us today to learn more and schedule a consultation.

Azure OpenAI Service

Azure OpenAI Service: Key Benefits and Features

Azure OpenAI Service: A Powerful Platform for Building AI Solutions

Overview

Artificial intelligence (AI) is revolutionizing industries worldwide. However, AI development can be challenging for businesses lacking expertise, resources, or infrastructure. That’s where the Azure OpenAI Service comes in.

It’s a cloud-based platform enabling easy building, training, and deployment of AI models using OpenAI’s cutting-edge technology. With Azure OpenAI Service, access powerful AI capabilities like natural language processing, computer vision, speech recognition, and generative models, worry-free.

Features and Benefits

The Service offers several advantages:

  • Pre-trained models: Access various pre-trained models for tasks like text summarization, sentiment analysis, and image captioning. Use them as-is or fine-tune to your needs.
  • Custom models: Create custom models with OpenAI Codex, generating code, text, images, and more from natural language inputs. Build apps, websites, and games with minimal effort.
  • Scalable and secure infrastructure: Runs on Azure cloud, ensuring scalable and secure AI projects. Easily adjust compute and storage resources while benefiting from Azure’s reliability and security.
  • Integration and collaboration: Seamlessly integrates with Azure Machine Learning, Azure Cognitive Services, Azure Data Factory, and Visual Studio Code. Collaborate via the web-based OpenAI Playground, experimenting and sharing results.
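For a feel of what using the service looks like in practice, here is one way to call an Azure OpenAI deployment from Python with the `openai` package; the deployment name, endpoint, and API version are placeholders, and the client interface has changed across package versions, so treat this as a sketch rather than a canonical recipe:

```python
# Sketch: building a completion request for an Azure OpenAI deployment.
# The deployment name, endpoint, and API version below are placeholders.

def completion_request(deployment, prompt, max_tokens=100):
    """Build the request parameters for a text completion call."""
    return {"model": deployment, "prompt": prompt, "max_tokens": max_tokens}

req = completion_request("my-gpt-deployment", "Summarize: cloud identity services")
# To actually send the request (requires: pip install openai):
#   from openai import AzureOpenAI
#   client = AzureOpenAI(
#       azure_endpoint="https://<resource>.openai.azure.com",
#       api_key="...",
#       api_version="2023-05-15",
#   )
#   print(client.completions.create(**req).choices[0].text)
```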

Azure OpenAI Service: Preview Mode

It is currently in preview mode and is available by invitation only. If you are interested in trying out this service, you can request access here: https://azure.microsoft.com/en-us/services/openai/

It is a powerful platform for building AI solutions that can help you solve your business problems and achieve your goals. Whether you want to enhance your customer experience, optimize your operations, or innovate your products and services, it can help you do it faster and easier than ever before.

Take the Next Step: Embrace the Power of Cloud Services

Ready to take your organization to the next level with cloud services? Our team of experts can help you navigate the cloud landscape and find the solutions that best meet your needs. Contact us today to learn more and schedule a consultation.

Streamlining Data Transformation: Azure Data Explorer's New DropMappedField Feature

DropMappedField in Azure Data Explorer: New Feature

Improving Data Analysis Efficiency with DropMappedField in Azure Data Explorer

DropMappedField in Azure Data Explorer: Overview

This blog post explains what DropMappedField is in Azure Data Explorer, its key features, and advantages.

DropMappedField is a data mapping transformation enabling JSON object-to-column mapping and removal of nested fields referenced by other mappings. This simplifies data ingestion, reduces storage consumption, and enhances query performance.

Key Features

Azure Data Explorer is a powerful data analytics service that allows ingestion, storage, and querying of massive volumes of structured, semi-structured, and unstructured data. It excels in ingesting diverse data sources and formats like JSON, CSV, Parquet, Avro, and more.

However, not all data formats are equally suitable for analysis. For example, JSON documents can have complex nested structures that make it hard to extract the relevant information and organize it into columns. To solve this problem, Azure Data Explorer provides data mappings, which are rules that define how to transform the ingested data into a tabular format.

In addition, Azure Data Explorer supports the data mapping transformation called DropMappedField. This transformation empowers you to map an object in a JSON document to a column and remove any nested fields that other column mappings reference. For example, consider the following JSON document:


{
  "name": "Alice",
  "age": 25,
  "address": {
    "city": "Seattle",
    "state": "WA",
    "zip": 98101
  }
}

If you want to map this document to a table with columns for name, age, city, and state, plus an address column that keeps whatever remains of the address object, you can use the following data mapping:


.create table MyTable (name: string, age: int, city: string, state: string, address: dynamic)
.create table MyTable ingestion json mapping 'MyMapping' '[{"column":"name","path":"$.name"},{"column":"age","path":"$.age"},{"column":"city","path":"$.address.city"},{"column":"state","path":"$.address.state"},{"column":"address","path":"$.address","transform":"DropMappedField"}]'

Notice that the last column mapping employs the DropMappedField transformation. It maps the address object to a column and removes the city and state fields, which are already mapped to their own columns, so the address column ends up holding only the remaining zip field. This approach prevents data duplication and conserves storage space.
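To make the semantics concrete, here is a small Python sketch that simulates what DropMappedField does to the example document above. It is only an illustration of the described behavior, not Azure Data Explorer’s actual implementation.

```python
import copy

def apply_drop_mapped_field(doc: dict, mappings: list) -> dict:
    """Toy illustration of DropMappedField semantics: map each JSON
    path to a column; for a column using the transform, emit the
    object with the fields claimed by other mappings removed."""
    def lookup(d, path):
        # A path like "$.address.city" walks the nested keys.
        node = d
        for key in path.lstrip("$.").split("."):
            node = node[key]
        return node

    row = {}
    for m in mappings:
        value = lookup(doc, m["path"])
        if m.get("transform") == "DropMappedField":
            value = copy.deepcopy(value)  # leave the source doc intact
            prefix = m["path"] + "."
            for other in mappings:
                if other is not m and other["path"].startswith(prefix):
                    value.pop(other["path"][len(prefix):], None)
        row[m["column"]] = value
    return row

doc = {"name": "Alice", "age": 25,
       "address": {"city": "Seattle", "state": "WA", "zip": 98101}}
mappings = [
    {"column": "name", "path": "$.name"},
    {"column": "age", "path": "$.age"},
    {"column": "city", "path": "$.address.city"},
    {"column": "state", "path": "$.address.state"},
    {"column": "address", "path": "$.address", "transform": "DropMappedField"},
]
row = apply_drop_mapped_field(doc, mappings)
# The address column keeps only the unmapped "zip" field.
```

Running the sketch yields a row where city and state are top-level columns and the address column has been stripped down to `{"zip": 98101}`, mirroring the Kusto mapping above.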

Advantages of DropMappedField

The DropMappedField transformation offers several advantages:

  • It simplifies data ingestion by letting you map complex JSON objects to columns without specifying each nested field.
  • It reduces storage consumption by eliminating redundant data that is unnecessary for analysis.
  • It improves query performance by reducing the number of columns and fields that require scanning.

The DropMappedField transformation is also available in Real-Time Analytics in Microsoft Fabric, a platform that supports ingestion and analysis of streaming data from diverse sources such as web apps, social media, and IoT devices.

Conclusion: DropMappedField in Azure Data Explorer

DropMappedField is a valuable feature for optimizing data ingestion and analysis in Azure Data Explorer. By mapping JSON objects to columns and eliminating redundant nested fields, it reduces the time, effort, and resources required to handle even the most complex and extensive JSON data.

Take the Next Step: Embrace the Power of Cloud Services

Ready to take your organization to the next level with cloud services? Our team of experts can help you navigate the cloud landscape and find the solutions that best meet your needs. Contact us today to learn more and schedule a consultation.

Azure HX Virtual Machines

Azure HX Virtual Machines for HPC

Introducing Azure HX Virtual Machines for High-Performance Computing (HPC)

Azure HX VMs: Overview

Azure HX Virtual Machines are a new series of VMs designed for high-performance computing (HPC) workloads. They offer high CPU performance, large memory capacity, and fast interconnects for parallel and distributed applications. Azure HX VMs are ideal for applications that require high compute density, memory bandwidth, and storage throughput, such as big data analytics, scientific computing, and video processing.

Capabilities

Some of the capabilities of Azure HX VMs are:

  • Processors are 4th Generation AMD EPYC CPUs with AMD 3D V-Cache technology (codenamed Genoa-X), offering up to 176 cores per VM and very high effective memory bandwidth for cache-sensitive workloads.
  • Each virtual machine is equipped with up to 1.4 TB of RAM, backed by local NVMe SSD storage for speedy data retrieval.
  • 400 Gb/s NVIDIA Quantum-2 InfiniBand networking provides fast, low-latency communication between virtual machines.
  • Integrated with Azure CycleCloud, simplifying the deployment and management of HPC clusters on Azure.
  • Compatible with various HPC software and frameworks, such as MPI, OpenMP, CUDA, TensorFlow, PyTorch, and more.

Azure HX VM Scenarios

You can use Azure HX VMs for various HPC scenarios, such as:

  • Computational fluid dynamics (CFD) involves simulating the flow of fluids and gases in complex systems, such as aircraft, cars, turbines, etc.
  • Computational chemistry involves modeling the structure and behavior of molecules and materials at the atomic level, such as drug discovery, catalysis, etc.
  • Computational biology involves analyzing large-scale biological data, such as genomics, proteomics, metabolomics, etc.
  • Artificial intelligence (AI) and machine learning (ML) involve training and running complex neural networks and algorithms for tasks such as image recognition, natural language processing, recommendation systems, etc.

Conclusion

Azure HX VMs are a powerful and flexible solution for running HPC workloads on Azure. They provide high performance, scalability, and reliability for a wide range of applications. Azure HX VMs are currently in preview in select regions, starting with East US.

Take the Next Step: Embrace the Power of Cloud Services

Ready to take your organization to the next level with cloud services? Our team of experts can help you navigate the cloud landscape and find the solutions that best meet your needs. Contact us today to learn more and schedule a consultation.

Azure Cognitive and OpenAI Services

Azure Cognitive & OpenAI Services: Benefits

Azure Cognitive Services & Azure OpenAI: Key Benefits

Overview

Azure Cognitive and OpenAI Services are powerful cloud-based services that enable developers and data scientists to build intelligent applications with minimal coding and data science skills. In this blog post, we will explore how these services can benefit enterprises in various domains and scenarios.

Azure Cognitive & OpenAI Services

Azure Cognitive Services is a family of APIs and SDKs covering vision, speech, language, and decision capabilities, with the Azure OpenAI Service as its newest addition. These services enable seamless integration of features such as image analysis, face recognition, speech transcription, and more. You can use them through REST APIs or client libraries in popular development languages.

Azure OpenAI is a new service that provides access to the powerful GPT-3 language model from OpenAI. GPT-3 is a deep learning model that can generate natural language text on almost any topic, given some input. You can use the Azure OpenAI Service to create conversational agents, generate summaries, write content, answer questions, and more.
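As a rough illustration of how the Cognitive Services REST APIs are consumed, the following Python sketch assembles a sentiment-analysis request against the Text Analytics v3.1 endpoint. The resource endpoint and subscription key are placeholders, and the network call itself is omitted to keep the example offline.

```python
import json

# Placeholder endpoint -- substitute your own Language resource values.
ENDPOINT = "https://my-language-resource.cognitiveservices.azure.com"

def build_sentiment_request(texts: list) -> dict:
    """Assemble a Text Analytics v3.1 sentiment call. Posting it with
    any HTTP client returns per-document sentiment labels and scores."""
    documents = [
        {"id": str(i), "language": "en", "text": t}
        for i, t in enumerate(texts, start=1)
    ]
    return {
        "url": f"{ENDPOINT}/text/analytics/v3.1/sentiment",
        "headers": {
            "Ocp-Apim-Subscription-Key": "<YOUR-KEY>",  # placeholder
            "Content-Type": "application/json",
        },
        "body": json.dumps({"documents": documents}),
    }

req = build_sentiment_request(["Azure Cognitive Services is easy to use."])
```

The same pattern (endpoint, `Ocp-Apim-Subscription-Key` header, JSON body) applies across the other Cognitive Services APIs, which is what makes them straightforward to integrate from any language.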

Key Benefits

With Azure Cognitive Services and the Azure OpenAI Service, enterprises can:

  • Accelerate your digital transformation by adding AI capabilities to your existing applications or creating new ones with minimal effort and cost.
  • Leverage the latest AI research and innovation from Microsoft and OpenAI without having to build and maintain your own models and infrastructure.
  • Deploy your AI solutions anywhere from the cloud to the edge with containers, ensuring scalability, security, and compliance.
  • Empower responsible use of AI with industry-leading tools and guidelines that help you protect user privacy and data sovereignty.

Latest Update

The Azure OpenAI Service has recently become available in the UK South region, with gpt-35-turbo supported at launch. Coupled with Azure Cognitive Services, it gives you a formidable toolkit for building intelligent applications capable of visual perception, auditory recognition, speech generation, language comprehension, and natural language text generation. Utilizing these services can significantly enhance your application’s capabilities.

Take the Next Step: Embrace the Power of Cloud Services

Ready to take your organization to the next level with cloud services? Our team of experts can help you navigate the cloud landscape and find the solutions that best meet your needs. Contact us today to learn more and schedule a consultation.
