Other Challenges
| + | [https://www.youtube.com/results?search_query=obstacles+challenges+Deep+Learning YouTube search...] | ||
| + | [https://www.google.com/search?q=obstacles+challenges+deep+machine+learning+ML ...Google search] | ||
| + | |||
| + | * [[Risk, Compliance and Regulation]] ... [[Ethics]] ... [[Privacy]] ... [[Law]] ... [[AI Governance]] ... [[AI Verification and Validation]] | ||
| + | * [[Backpropagation]] ... [[Feed Forward Neural Network (FF or FFNN)|FFNN]] ... [[Forward-Forward]] ... [[Activation Functions]] ...[[Softmax]] ... [[Loss]] ... [[Boosting]] ... [[Gradient Descent Optimization & Challenges|Gradient Descent]] ... [[Algorithm Administration#Hyperparameter|Hyperparameter]] ... [[Manifold Hypothesis]] ... [[Principal Component Analysis (PCA)|PCA]] | ||
| + | * [[Immersive Reality]] ... [[Metaverse]] ... [[Omniverse]] ... [[Transhumanism]] ... [[Religion]] | ||
| + | * [[Telecommunications]] ... [[Computer Networks]] ... [[Telecommunications#5G|5G]] ... [[Satellite#Satellite Communications|Satellite Communications]] ... [[Quantum Communications]] ... [[Agents#Communication | Communication Agents]] ... [[Smart Cities]] ... [[Digital Twin]] ... [[Internet of Things (IoT)]] | ||
| + | * [[Video/Image]] ... [[Vision]] ... [[Enhancement]] ... [[Fake]] ... [[Reconstruction]] ... [[Colorize]] ... [[Occlusions]] ... [[Predict image]] ... [[Image/Video Transfer Learning]] | ||
| + | * [[Symbiotic Intelligence]] ... [[Bio-inspired Computing]] ... [[Neuroscience]] ... [[Connecting Brains]] ... [[Nanobots#Brain Interface using AI and Nanobots|Nanobots]] ... [[Molecular Artificial Intelligence (AI)|Molecular]] ... [[Neuromorphic Computing|Neuromorphic]] ... [[Evolutionary Computation / Genetic Algorithms| Evolutionary/Genetic]] | ||
| + | * [[Energy]] Consumption | ||
| + | |||
| + | |||
| + | == The Wall - Deep Learning == | ||
| + | Deep neural nets are huge and bulky inefficient creatures that allow you to effectively solve a learning problem by getting huge amounts of data and a super computer. They currently trade efficiency for brute force almost every time. | ||
| + | |||
| + | |||
| + | [https://towardsdatascience.com/recent-advances-for-a-better-understanding-of-deep-learning-part-i-5ce34d1cc914 Towards Theoretical Understanding of Deep Learning | Sanjeev Arora] | ||
| + | |||
| + | * Non Convex Optimization: How can we understand the highly non-convex loss function associated with deep neural networks? Why does stochastic gradient descent even converge? | ||
| + | * Overparametrization and Generalization: In classical statistical theory, generalization depends on the number of parameters but not in deep learning. Why? Can we find another good measure of generalization? | ||
| + | * Role of Depth: How does depth help a neural network to converge? What is the link between depth and generalization? | ||
| + | * Generative Models: Why do Generative Adversarial Networks (GANs) work so well? What theoretical properties could we use to stabilize them or avoid mode collapse? | ||
| + | |||
| + | |||
| + | |||
| + | |||
| + | |||
| + | <youtube>WTnxE0wjZaM</youtube> | ||
| + | <youtube>CLDisFuDnog</youtube> | ||
| + | <youtube>1_KhJv0Em5Y</youtube> | ||
| + | <youtube>rAQ-0wTavfM</youtube> | ||
| + | <youtube>0tEhw5t6rhc</youtube> | ||
| + | <youtube>kpqPFUu9JvU</youtube> | ||
| + | <youtube>Q7ifcUuMZvk</youtube> | ||
| + | <youtube>v3QGgtmAZTE</youtube> | ||
| + | <youtube>kYEUkpHpOKA</youtube> | ||
| + | <youtube>rTawFwUvnLE</youtube> | ||
| + | |||
| + | == The Expert == | ||
| + | |||
| + | <youtube>BKorP55Aqvg</youtube> | ||