|title=PRIMO.ai
|titlemode=append
|keywords=ChatGPT, artificial, intelligence, machine, learning, GPT-4, GPT-5, NLP, NLG, NLC, NLU, models, data, singularity, moonshot, Sentience, AGI, Emergence, Explainable, TensorFlow, Google, Nvidia, Microsoft, Azure, Amazon, AWS, Hugging Face, OpenAI, Meta, LLM, metaverse, assistants, agents, digital twin, IoT, Transhumanism, Immersive Reality, Generative AI, Conversational AI, Perplexity, Bing, You, Bard, Ernie, prompt Engineering, LangChain, Video/Image, Vision, End-to-End Speech, Synthesize Speech, Speech Recognition, Stanford, MIT
|description=Helpful resources for your journey with artificial intelligence; videos, articles, techniques, courses, profiles, and tools
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-4GCWLBVJ7T"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());

gtag('config', 'G-4GCWLBVJ7T');
</script>
}}
[https://www.youtube.com/results?search_query=energy+consumption+policy+considerations+Efficient+Deep+Learning+ YouTube search...]
[https://www.google.com/search?q=energy+consumption+policy+considerations+Efficient+deep+machine+learning+ML+artificial+intelligence ...Google search]
* [[Energy-based Model (EBN)]]
* [[Case Studies]]
** [[Power (Management)]]
*** [[Astronomy#Sun / Solar|Sun / Solar]]
** [[Chemistry]]
* [[Other Challenges]] in Artificial Intelligence
* [https://drive.google.com/file/d/1v3TxkqPuzvRfiV_RVyRTTFbHl1pZq7Ab/view Energy and Policy Considerations for Deep Learning in NLP | E. Strubell, A. Ganesh, and A. McCallum - College of Information and Computer Sciences, University of Massachusetts Amherst] ... a back-of-the-envelope estimation sketch in the style of this paper follows this list
* [https://www.google.com/search?q=Energy+Efficient+Machine+Learning+and+Cognitive+Computing Energy Efficient Machine Learning and Cognitive Computing]
* [https://www.wired.com/story/ai-is-throwing-battery-development-into-overdrive/ AI Is Throwing Battery Development Into Overdrive | Daniel Oberhaus - Wired] ... Improving batteries has always been hampered by slow experimentation and discovery processes. Machine learning is speeding it up by orders of magnitude.
* [https://www.jhunewsletter.com/article/2021/11/spiral-center-uses-artificial-intelligence-to-make-solar-energy-cheaper SPIRAL Center uses artificial intelligence to make solar energy cheaper | Zachary Bahar - Johns Hopkins News-Letter]
* [https://www.livescience.com/ai-controls-hydrogen-plasmas-nuclear-fusion Nuclear fusion is one step closer with new AI breakthrough | Tom Metcalfe - Livescience]
* [https://techxplore.com/news/2023-03-deep-efficiently-electric-grids.html Using deep learning to develop a forecasting model for efficiently managing electric grids | Chung-Ang University]

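The Strubell, Ganesh, and McCallum paper listed above estimates the energy and carbon footprint of training NLP models from hardware power draw, training time, datacenter overhead (PUE), and the grid's emissions factor. The Python sketch below reproduces that style of back-of-the-envelope calculation, simplified to GPU power only; the GPU count, wattage, training duration, PUE, and emissions factor are illustrative assumptions, not figures from the paper.

<pre>
# Back-of-the-envelope training energy / CO2 estimate in the style of
# Strubell et al. (2019). All inputs below are illustrative assumptions.

def training_footprint(num_gpus, avg_gpu_watts, hours, pue, kg_co2_per_kwh):
    """Return (energy in kWh, emissions in kg CO2e) for one training run."""
    kwh = num_gpus * avg_gpu_watts * hours * pue / 1000.0
    return kwh, kwh * kg_co2_per_kwh

# Hypothetical example: 8 GPUs averaging 250 W each, training for 240 hours,
# datacenter PUE of 1.6, and a grid emitting 0.45 kg CO2e per kWh
kwh, kg_co2 = training_footprint(num_gpus=8,
                                 avg_gpu_watts=250,
                                 hours=240,
                                 pue=1.6,
                                 kg_co2_per_kwh=0.45)

print(f"Energy: {kwh:,.0f} kWh   Emissions: {kg_co2:,.0f} kg CO2e")
</pre>

Scaling the same arithmetic up to hundreds of GPUs, thousands of hours, and repeated hyperparameter searches is what produces the headline figures reported in the paper.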
{|<!-- T -->
<youtube>8Qa0E1jdkrE</youtube>
<b>Energy-Efficient Deep Learning: Challenges and Opportunities
</b><br>This talk will describe methods to enable energy-efficient processing for deep learning, specifically convolutional neural networks (CNNs), which are the cornerstone of many deep-learning algorithms. Deep learning plays a critical role in extracting meaningful information out of the zettabytes of sensor data collected every day. For some applications, the goal is to analyze and understand the data to identify trends (e.g., surveillance, portable/wearable electronics); in other applications, the goal is to take immediate action based on the data (e.g., robotics/drones, self-driving cars, smart Internet of Things). For many of these applications, local embedded processing near the sensor is preferred over the cloud due to privacy or latency concerns, or limitations in the communication bandwidth. However, at the sensor there are often stringent constraints on energy consumption and cost in addition to throughput and accuracy requirements. Furthermore, flexibility is often required so that the processing can be adapted for different applications or environments (e.g., updating the weights and model in the classifier). We will give a short overview of the key concepts in CNNs, discuss their challenges particularly in the embedded space, and highlight various opportunities that can help address these challenges at levels of design ranging from architecture and implementation-friendly algorithms to advanced technologies (including memories and sensors). An illustrative per-layer energy-estimation sketch follows this table. [https://www.rle.mit.edu/eems/wp-content/uploads/2018/04/Energy-Efficient-Deep-Learning-SSCS-DL-Sze.pdf Slides]
|}
|<!-- M -->
<youtube>WN7VI0h7kbU</youtube>
<b>Advances in Energy Efficiency Through Cloud and Machine Learning
</b><br>Main speaker: Urs Hölzle. Today, the IT industry accounts for about 2 percent of total greenhouse gas emissions, comparable to the footprint of air travel. Will IT emissions eclipse air travel one day soon? Urs Hölzle thinks the clear answer is “no”: he says IT energy will decrease, and perhaps decrease significantly, over the next decade. Find out why. Hölzle is Senior Vice President of Technical Infrastructure & Google Fellow and oversees the design and operation of the servers, networks, and data centers that power Google's services, as well as the [[development]] of the software infrastructure used by Google's applications. Recorded on 10/20/2017. Series: "Institute for Energy Efficiency" [3/2018] [Show ID: 33271]
|}
|}<!-- B -->
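The "Energy-Efficient Deep Learning" talk above argues that the energy cost of CNN inference is driven by multiply-accumulate (MAC) operations and, even more, by data movement between memory levels. Below is a minimal Python sketch that estimates one convolutional layer's energy from its MAC count using assumed per-operation costs; the layer shape and the picojoule figures are illustrative placeholders, not measurements of any particular chip.

<pre>
# Back-of-the-envelope energy estimate for one CNN layer (illustrative only).
# Layer shape and per-operation energies below are assumed example values.

def conv_layer_macs(h_out, w_out, c_out, k, c_in):
    """Multiply-accumulates needed for one convolutional layer."""
    return h_out * w_out * c_out * k * k * c_in

# Example layer: 56x56 output, 64 output channels, 3x3 kernel, 64 input channels
macs = conv_layer_macs(56, 56, 64, 3, 64)

# Assumed energy costs (picojoules), rough older-process-node figures for illustration
E_MAC_PJ = 4.6        # one 32-bit multiply-accumulate
E_SRAM_PJ = 5.0       # one 32-bit read from on-chip SRAM
E_DRAM_PJ = 640.0     # one 32-bit read from off-chip DRAM

# Crude assumption: every MAC needs one operand fetch; vary the fraction served by DRAM
for dram_fraction in (0.0, 0.01, 0.1):
    fetch_energy = macs * (dram_fraction * E_DRAM_PJ + (1 - dram_fraction) * E_SRAM_PJ)
    total_pj = macs * E_MAC_PJ + fetch_energy
    print(f"MACs: {macs:,}  DRAM fraction: {dram_fraction:4.0%}  "
          f"energy: {total_pj / 1e6:8.2f} microjoules")
</pre>

Even with these rough numbers, letting a small fraction of operand fetches spill to DRAM dominates the total, which is why dataflows that maximize on-chip data reuse are central to energy-efficient accelerators.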
||
<youtube>ePCT9d8HI0Y</youtube>
<b>Deep Learning Montréal @ Autodesk – Deeplite: Faster, smaller, energy-efficient Deep Neural Networks
</b><br>Ehsan Saboori, Technical Co-founder, Deeplite https://www.deeplite.ai/ [https://www.meetup.com/Deep-Learning-Montreal/events/248338612/ Full details here]
|}
|}<!-- B -->
{| class="wikitable" style="width: 550px;"
||
<youtube>aWk6WcJHDqk</youtube>
<b>Artificial intelligence in Energy - from hype to next big hope
</b><br>Artificial intelligence - hyped out of proportion, ahead of its time, and something to fear? In this two-part series, we look at AI in the energy sector and get to grips with the truth beneath the hype, cycles and buzzwords.
|}
|<!-- M -->
<youtube>qIZOhK2Ywz0</youtube>
<b>AI Is Helping Supply 1 Billion People in India with Renewable Energy
</b><br>World Economic Forum. India is changing the way energy happens by bringing together the power of hardware with software. Watch to see how. https://www.weforum.org/
|}
|}<!-- B -->
= Reducing Energy Consumption of AI =
{|<!-- T -->
| valign="top" |
<youtube>VzyzKv_LBRw</youtube>
<b>Saving Energy Consumption With Deep Learning
</b><br>Discover how big data, GPUs, and deep learning can enable smarter decisions on making your building more energy-efficient with AI startup Verdigris; a minimal load-forecasting sketch follows this table. Explore more about AI & Deep Learning: https://nvda.ws/2sbWvNm
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>A3p_w7ENefs</youtube>
<b>Energy-Efficient AI
</b><br>Carlos Macian, senior director of innovation for eSilicon EMEA, talks with Semiconductor Engineering about how to improve the efficiency of AI operations by focusing on the individual operations, including data transport, computation and [[memory]].
|}
|}<!-- B -->
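The Verdigris video above is about using deep learning to cut building energy consumption; a common ingredient of such systems is short-term load forecasting. The following sketch is a minimal, self-contained illustration using scikit-learn on synthetic hourly load data; the data generator, feature window, and model size are assumptions for demonstration only, not Verdigris's actual method.

<pre>
# Minimal short-term building-load forecasting sketch (synthetic data, illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Synthetic hourly load (kW): daily cycle + weekly cycle + noise (assumed, not real data)
hours = np.arange(24 * 7 * 20)                       # 20 weeks of hourly readings
load = (50
        + 20 * np.sin(2 * np.pi * hours / 24)        # daily pattern
        + 5 * np.sin(2 * np.pi * hours / (24 * 7))   # weekly pattern
        + rng.normal(0, 2, hours.size))              # sensor noise

# Features: the previous 24 hourly readings; target: the next hour's load
window = 24
X = np.array([load[i:i + window] for i in range(load.size - window)])
y = load[window:]

split = int(0.8 * len(X))                            # simple chronological split
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
print(f"Test MAE: {mean_absolute_error(y[split:], pred):.2f} kW")
</pre>

Forecasts of this kind feed downstream decisions such as pre-cooling a building or shifting flexible loads into cheaper, cleaner hours.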
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>kYEUkpHpOKA</youtube>
<b>Efficient Processing for Deep Learning: Challenges and Opportunities
</b><br>Dr. Vivienne Sze, Associate Professor in the Electrical Engineering and Computer Science Department at MIT (www.rle.mit.edu/eems), presents a one-hour webinar, "Efficient Processing for Deep Learning: Challenges and Opportunities," organized by the Embedded Vision Alliance. Deep neural networks (DNNs) are proving very effective for a variety of challenging machine perception tasks. But these algorithms are very computationally demanding. To enable DNNs to be used in practical applications, it’s critical to find efficient ways to implement them. This webinar explores how DNNs are being mapped onto today’s processor architectures, and how both DNN algorithms and specialized processors are evolving to enable improved efficiency. Sze concludes with suggestions on how to evaluate competing processor solutions in order to address your particular application and design requirements; a roofline-style comparison sketch follows this table.
|}
|<!-- M -->
<youtube>Y0XGSnRrWiU</youtube>
<b>Energy-Efficient AI | Vivienne Sze | TEDxMIT
</b><br>Today, most of the processing for Artificial Intelligence (AI) happens in the cloud (i.e., data centers); however, there are many compelling reasons to perform the processing locally on the device (e.g., smartphones or robots), including reducing the dependence on communication infrastructure, preserving data privacy, and reducing reaction time. One of the key limitations of local processing is energy consumption. Researchers are working on various techniques to enable energy-efficient AI, which extends the reach of AI beyond the cloud to a wide range of applications from robotics to health care. Vivienne Sze received the B.A.Sc. (Hons) degree in electrical engineering from the University of Toronto, Toronto, ON, Canada, in 2004, and the S.M. and Ph.D. degrees in electrical engineering from the Massachusetts Institute of Technology (MIT), Cambridge, MA, in 2006 and 2010, respectively. She received the Jin-Au Kong Outstanding Doctoral Thesis Prize for her Ph.D. thesis in electrical engineering at MIT in 2011. She is an Associate Professor in the Electrical Engineering and Computer Science Department at MIT. Her research interests include energy-efficient algorithms and architectures for portable multimedia applications. From September 2010 to July 2013, she was a Member of Technical Staff in the Systems and Applications R&D Center at Texas Instruments (TI), Dallas, TX, where she designed low-power algorithms and architectures for [[Video|video]] coding. She also represented TI in the JCT-VC committee of the ITU-T and ISO/IEC standards bodies during the development of High Efficiency [[Video]] Coding (HEVC), which received a Primetime Emmy Engineering Award. She co-edited the book High Efficiency [[Video]] Coding (HEVC): Algorithms and Architectures (Springer, 2014). She was a recipient of the 2017 Qualcomm Faculty Award, the 2016 Google Faculty Research Award, the 2016 AFOSR Young Investigator Research Program Award, the 2016 3M Non-Tenured Faculty Award, the 2014 DARPA Young Faculty Award, and the 2007 DAC/ISSCC Student Design Contest Award, and a co-recipient of the 2017 CICC Best Invited Paper Award, the 2016 Micro Top Picks Award, and the 2008 A-SSCC Outstanding Design Award. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx
|}
|}<!-- B -->
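Sze's webinar above ends with advice on how to evaluate competing processors for a given DNN workload. A quick first-pass comparison is a roofline-style bound: attainable throughput is limited either by peak compute or by memory bandwidth times the workload's arithmetic intensity. The sketch below compares two hypothetical accelerators on one hypothetical layer; every number (TOPS, bandwidth, layer shape) is an assumed example, not a benchmark of real hardware.

<pre>
# Roofline-style back-of-the-envelope comparison of two hypothetical accelerators.
# All hardware and layer numbers are illustrative assumptions.

def attainable_tops(peak_tops, bandwidth_gbs, ops_per_byte):
    """Attainable throughput (Tops/s) = min(peak compute, bandwidth * intensity)."""
    memory_bound_tops = bandwidth_gbs * ops_per_byte / 1000.0  # GB/s * ops/B -> Gops/s -> Tops/s
    return min(peak_tops, memory_bound_tops)

# Hypothetical layer: 200 M ops touching 4 MB of weights/activations
layer_ops = 200e6
layer_bytes = 4e6
intensity = layer_ops / layer_bytes        # ops per byte moved

accelerators = {
    "BigCompute (assumed)":   {"peak_tops": 32.0, "bandwidth_gbs": 50.0},
    "BigBandwidth (assumed)": {"peak_tops": 8.0,  "bandwidth_gbs": 400.0},
}

for name, spec in accelerators.items():
    tops = attainable_tops(spec["peak_tops"], spec["bandwidth_gbs"], intensity)
    runtime_us = layer_ops / (tops * 1e12) * 1e6
    bound = "compute" if tops == spec["peak_tops"] else "memory"
    print(f"{name:24s} intensity={intensity:5.1f} ops/B  "
          f"attainable={tops:5.1f} Tops/s ({bound}-bound)  layer time={runtime_us:6.1f} us")
</pre>

Even this crude model shows why the "fastest" chip on paper can lose to one with more memory bandwidth once a layer's arithmetic intensity is low, which is exactly the kind of workload-dependent evaluation the webinar recommends.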