Amazon
  
 
* [https://docs.aws.amazon.com/machine-learning/index.html Amazon Machine Learning Documentation]   
 
* [[Development]] ... [[Notebooks]] ... [[Development#AI Pair Programming Tools|AI Pair Programming]] ... [[Codeless Options, Code Generators, Drag n' Drop|Codeless]] ... [[Hugging Face]] ... [[Algorithm Administration#AIOps/MLOps|AIOps/MLOps]] ... [[Platforms: AI/Machine Learning as a Service (AIaaS/MLaaS)|AIaaS/MLaaS]]
 
* [[Bedrock]]
 
 
* [[Development#CodeWhisperer|CodeWhisperer]]
 
 
* [https://aws.amazon.com/about-aws/events/monthlywebinarseries/on-demand/ On-Demand AWS Tech Talks]
 
 
* [https://aws.amazon.com/training/ AWS Training and Certification]
 
* [[Agents]] ... [[Robotic Process Automation (RPA)|Robotic Process Automation]] ... [[Assistants]] ... [[Personal Companions]] ... [[Personal Productivity|Productivity]] ... [[Email]] ... [[Negotiation]] ... [[LangChain]]
 
** [https://techcrunch.com/2018/02/12/amazon-may-be-developing-ai-chips-for-alexa/ Amazon Is Becoming an AI Chip Maker, Speeding Alexa Responses]
 
 
** [https://venturebeat.com/2019/09/25/amazon-unveils-echo-buds-alexa-enabled-earbuds-that-track-your-steps/ Amazon unveils Echo Buds, Alexa-enabled earbuds with noise reduction | Kyle Wiggers]
 
 
** [https://www.amazon.science/research-areas/robotics Robotics]
 
 
** [https://www.amazon.science/research-areas/search-and-information-retrieval Search and information retrieval]
 
* [https://aws.amazon.com/blogs/aws/amazon-q-brings-generative-ai-powered-assistance-to-it-pros-and-developers-preview/ Amazon Q brings generative AI-powered assistance to IT pros and developers (preview) | Amazon]

** [https://techcrunch.com/2023/11/28/amazon-unveils-q-an-ai-powered-chatbot-for-businesses/ Amazon unveils Q, an AI-powered chatbot for businesses | Kyle Wiggers - TechCrunch]

* [https://www.aboutamazon.com/news/company-news/amazon-anthropic-ai-investment Amazon and Anthropic deepen their shared commitment to advancing generative AI | Amazon] ... using [[Anthropic]]'s [[Claude]] on Amazon [[Bedrock]]
  
 
_______________________________________________
 
= Inferentia =

* [https://aws.amazon.com/machine-learning/inferentia/ Why Inferentia? | AWS]

* [https://finance.yahoo.com/news/did-amazon-just-checkmate-nvidia-221500449.html Did Amazon Just Say "Checkmate" to Nvidia? | Adam Spatacco - The Motley Fool]

ChatGPT: AWS Inferentia is a custom-designed machine learning inference chip developed by Amazon Web Services (AWS) to accelerate deep learning workloads. The chip is optimized for high-performance, low-latency, and cost-effective inference: the process of running trained machine learning models to make predictions or classifications. By using AWS Inferentia, organizations can deploy machine learning models faster and more cost-effectively for a variety of applications, including image and speech recognition, natural language processing, and recommendation engines. Key features and benefits of AWS Inferentia include:

* <b>High Performance:</b> Inferentia delivers high throughput and low latency, making it ideal for real-time applications. It supports multiple machine learning frameworks such as TensorFlow, PyTorch, and Apache MXNet.

* <b>Cost Efficiency:</b> By providing dedicated hardware for inference, Inferentia can reduce the cost of inference operations compared to general-purpose CPUs or GPUs.

* <b>Compatibility:</b> AWS Inferentia is integrated with Amazon SageMaker, AWS's fully managed machine learning service, and supports models trained on popular frameworks. This makes it easier for developers to deploy their existing models on Inferentia-based instances.

* <b>Scalability:</b> It scales to large machine learning workloads, allowing users to deploy multiple models simultaneously or to serve a high volume of inference requests.

* <b>Availability:</b> Inferentia-powered instances, such as the Inf1 instance type, are available on Amazon EC2. These instances are designed to provide optimal performance for inference applications.

<youtube>2XUoDfdBoM8</youtube>

<youtube>pokM1r3rgIg</youtube>

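As a back-of-the-envelope illustration of the scalability point above, a short sketch of picking an Inf1 instance size from a measured per-chip throughput. The traffic target and the 500-inferences/sec figure are hypothetical; the Inferentia chip counts per instance size follow AWS's published Inf1 specs.

```python
import math

# Inferentia chips per EC2 Inf1 instance size (per AWS's Inf1 specs).
INF1_CHIPS = {"inf1.xlarge": 1, "inf1.2xlarge": 1, "inf1.6xlarge": 4, "inf1.24xlarge": 16}

def smallest_inf1(required_rps: float, rps_per_chip: float) -> str:
    """Pick the smallest Inf1 size whose chips cover the required requests/sec."""
    chips_needed = math.ceil(required_rps / rps_per_chip)
    # Walk sizes from smallest to largest capacity; return the first that fits.
    for size in sorted(INF1_CHIPS, key=INF1_CHIPS.get):
        if INF1_CHIPS[size] >= chips_needed:
            return size
    raise ValueError("need multiple instances behind a load balancer")

# Assumed measurement: one chip sustains 500 inferences/sec for this model.
print(smallest_inf1(1800, 500))  # -> inf1.6xlarge (3.6 chips, rounded up to 4)
```

Beyond a single inf1.24xlarge, capacity comes from horizontal scaling (more instances behind a load balancer), which is why the sketch raises rather than guessing.
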
= Integrated Components/Technologies =
  
 
* [[Textract]] in the Elastic Stack Architecture
 
 
* [https://aws.amazon.com/athena Athena] interactive query service to analyze data in Amazon S3 using standard SQL
 
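Since Athena takes plain SQL over objects in S3, a minimal sketch of the request handed to its StartQueryExecution API. The database, table, and bucket names are made up; with boto3, this dict would be passed to the Athena client's start_query_execution.

```python
# Sketch: build the argument dict for Athena's StartQueryExecution API.
# (All names are placeholders; no AWS call is made here.)
def athena_query_params(database: str, table: str, output_s3: str) -> dict:
    query = f'SELECT status, COUNT(*) AS n FROM "{database}"."{table}" GROUP BY status'
    return {
        "QueryString": query,
        "QueryExecutionContext": {"Database": database},
        # Athena writes result files to this S3 location.
        "ResultConfiguration": {"OutputLocation": output_s3},
    }

params = athena_query_params("weblogs", "access_logs", "s3://my-athena-results/")
print(params["QueryString"])
```
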
 
* [https://aws.amazon.com/serverless/ run applications and services without thinking about servers]; [[Serverless]]
 
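In practice, "without thinking about servers" usually means AWS Lambda: you supply only a handler function and AWS runs it on demand. A minimal sketch, with the event fields and response body invented for illustration:

```python
import json

# Minimal AWS Lambda-style handler; on AWS, Lambda invokes handler(event, context).
def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }

# Invoked locally here; on AWS, API Gateway or another trigger supplies the event.
resp = handler({"name": "AWS"}, None)
print(resp["body"])
```
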
* [https://aws.amazon.com/glue/ Glue] a fully managed extract, transform, and load (ETL) service to prepare and load data for [[analytics]]
 
** [https://docs.aws.amazon.com/glue/latest/dg/add-crawler.html Crawlers] to populate the AWS Glue Data Catalog with tables
 
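A crawler is defined mostly by a name, an IAM role, the catalog database its tables land in, and the S3 paths to scan. A sketch of the argument dict for Glue's CreateCrawler API; all names, ARNs, and paths are placeholders, and with boto3 the dict would go to the Glue client's create_crawler.

```python
# Sketch: arguments for Glue's CreateCrawler API (names/paths are placeholders).
def crawler_params(name: str, role_arn: str, database: str, s3_paths: list) -> dict:
    return {
        "Name": name,
        "Role": role_arn,              # IAM role the crawler assumes
        "DatabaseName": database,      # Data Catalog database for discovered tables
        "Targets": {"S3Targets": [{"Path": p} for p in s3_paths]},
    }

params = crawler_params(
    "weblogs-crawler",
    "arn:aws:iam::123456789012:role/GlueCrawlerRole",
    "weblogs",
    ["s3://my-data-lake/weblogs/"],
)
print(params["Targets"])
```
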
 
* [[Management Console]] - manage web services
 
  
 
<img src="https://d1.awsstatic.com/training-and-certification/Learning_Paths/learning-paths_ml-developer_march2020.b7bca6ba2cf5ffe563707f849ef636b3f6d5d91f.png" width="800" height="500">
 
= AWS Summit New York City 2023 =

<youtube>1PkABWCJINM</youtube>

Latest revision as of 15:10, 2 June 2024
