Other Challenges

* [[Risk, Compliance and Regulation]] ... [[Ethics]] ... [[Privacy]] ... [[Law]] ... [[AI Governance]] ... [[AI Verification and Validation]]
* [[Backpropagation]] ... [[Feed Forward Neural Network (FF or FFNN)|FFNN]] ... [[Forward-Forward]] ... [[Activation Functions]] ... [[Softmax]] ... [[Loss]] ... [[Boosting]] ... [[Gradient Descent Optimization & Challenges|Gradient Descent]] ... [[Algorithm Administration#Hyperparameter|Hyperparameter]] ... [[Manifold Hypothesis]] ... [[Principal Component Analysis (PCA)|PCA]]
 
 
* [[Immersive Reality]] ... [[Metaverse]] ... [[Digital Twin]] ... [[Internet of Things (IoT)]] ... [[Transhumanism]]
 
* [[Video/Image]] ... [[Vision]] ... [[Enhancement]] ... [[Fake]] ... [[Reconstruction]] ... [[Colorize]] ... [[Occlusions]] ... [[Predict image]] ... [[Image/Video Transfer Learning]]




The Wall - Deep Learning

Deep neural nets are huge, bulky, and inefficient creatures: they let you solve a learning problem effectively by throwing enormous amounts of data and a supercomputer at it. They currently trade efficiency for brute force almost every time.


Towards Theoretical Understanding of Deep Learning | Sanjeev Arora

* Non-Convex Optimization: How can we understand the highly non-convex loss function associated with deep neural networks? Why does stochastic gradient descent even converge? (A toy sketch follows this list.)
* Overparametrization and Generalization: Classical statistical theory says generalization should degrade as the number of parameters grows, yet heavily overparameterized deep networks generalize well. Why? Can we find a better measure of generalization? (See the interpolation sketch below.)
* Role of Depth: How does depth help a neural network to converge? What is the link between depth and generalization? (See the depth sketch below.)
* Generative Models: Why do Generative Adversarial Networks (GANs) work so well? What theoretical properties could we use to stabilize them or avoid mode collapse? (The GAN objective is restated below.)
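
To make the first question concrete, here is a minimal sketch (my own illustration, not from the talk) of stochastic gradient descent on a one-dimensional non-convex loss; the toy loss f(w) = w^4 - 3w^2 and the Gaussian noise model are assumptions chosen for simplicity.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(w, noise=0.5):
    # Exact gradient of the non-convex loss f(w) = w**4 - 3*w**2,
    # plus Gaussian noise standing in for mini-batch stochasticity.
    return 4 * w**3 - 6 * w + noise * rng.normal()

w = rng.normal()          # random initialization
lr = 0.01                 # learning rate
for _ in range(5000):
    w -= lr * noisy_grad(w)

# f has a local maximum at w = 0 and two global minima at w = +/- sqrt(1.5);
# despite the non-convexity, SGD settles near one of the minima.
print(f"w = {w:.3f}  (minima at +/-{np.sqrt(1.5):.3f})")
</syntaxhighlight>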

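For the second question, a toy illustration (again my own assumption, not Arora's example): a linear model with ten times more parameters than training samples interpolates the data exactly, the regime in which classical parameter-counting bounds predict failure.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 200                            # 10x more parameters than samples
w_true = rng.normal(size=d) / np.sqrt(d)  # ground-truth weights

X_train = rng.normal(size=(n, d))
y_train = X_train @ w_true
X_test = rng.normal(size=(500, d))
y_test = X_test @ w_true

# Gradient descent initialized at zero converges to the minimum-norm
# interpolant, computed here in closed form with the pseudoinverse.
w_hat = np.linalg.pinv(X_train) @ y_train

print("train MSE:", np.mean((X_train @ w_hat - y_train) ** 2))  # ~0: interpolation
print("test  MSE:", np.mean((X_test @ w_hat - y_test) ** 2))
# Training error is (numerically) zero with d >> n, and the solution's norm
# stays bounded; explaining when such interpolating solutions also
# generalize well is exactly the open problem.
</syntaxhighlight>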

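On the role of depth, one concrete separation comes from a well-known tent-map construction (often attributed to Telgarsky), sketched below: composing a two-ReLU "tent" k times yields a sawtooth with 2^k linear pieces, which a depth-k network represents with O(k) units but a one-hidden-layer network needs exponentially many units to match.

<syntaxhighlight lang="python">
import numpy as np

def tent(x):
    # t(x) = 2x on [0, 1/2] and 2(1 - x) on [1/2, 1]; exactly two ReLUs:
    # t(x) = 2*relu(x) - 4*relu(x - 0.5).
    relu = lambda z: np.maximum(z, 0.0)
    return 2 * relu(x) - 4 * relu(x - 0.5)

x = np.linspace(0, 1, 9)
deep = x.copy()
for _ in range(3):        # three composed layers -> 2**3 = 8 linear pieces
    deep = tent(deep)

print(np.round(deep, 3))  # alternates 0, 1, 0, 1, ... across [0, 1]
</syntaxhighlight>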

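For the last question, it helps to restate the original minimax objective from Goodfellow et al. (2014). Mode collapse is the failure mode where the generator G covers only a few modes of the data distribution while still fooling the discriminator D, and the question above asks which properties of this objective can be exploited to prevent that.

<math>\min_G \max_D \; \mathbb{E}_{x \sim p_\text{data}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]</math>
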
The Expert