- Privacy in Data Science
- Backpropagation
- Gradient Descent Optimization & Challenges
- AI Verification and Validation
- Digital Twin
- Occlusions
- Bio-inspired Computing
- Energy Consumption
- Ethics Standards
The Wall - Deep Learning
Deep neural networks are huge, bulky, inefficient creatures: they let you solve a learning problem effectively by throwing enormous amounts of data and a supercomputer at it. They currently trade efficiency for brute force almost every time.
Towards Theoretical Understanding of Deep Learning | Sanjeev Arora
- Non-Convex Optimization: How can we understand the highly non-convex loss function associated with deep neural networks? Why does stochastic gradient descent even converge? (A minimal SGD sketch follows this list.)
- Overparametrization and Generalization: In classical statistical theory, generalization depends on the number of parameters, yet heavily overparametrized deep networks still generalize well. Why? Can we find another good measure of generalization? (See the overparametrization sketch after this list.)
- Role of Depth: How does depth help a neural network to converge? What is the link between depth and generalization?
- Generative Models: Why do Generative Adversarial Networks (GANs) work so well? What theoretical properties could we use to stabilize them or avoid mode collapse? (A minimal GAN sketch appears after this list.)
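The non-convex optimization question can be made concrete with a small experiment. The following is a minimal sketch (not from the original article) of stochastic gradient descent on a simple non-convex "double well" loss, where Gaussian gradient noise stands in for mini-batch sampling; the loss function, step size, and noise level are illustrative assumptions.

```python
import numpy as np

# Hypothetical non-convex loss: a per-coordinate quartic "double well",
# f(w) = sum_i (w_i^2 - 1)^2, which has many local minima.
def loss(w):
    return np.sum((w**2 - 1.0)**2)

def grad(w):
    return 4.0 * w * (w**2 - 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=10)   # random initialization
lr = 0.01                 # illustrative step size

for step in range(5000):
    # "Stochastic" gradient: true gradient plus noise modelling mini-batch sampling.
    noisy_grad = grad(w) + 0.1 * rng.normal(size=w.shape)
    w -= lr * noisy_grad

print("final loss:", loss(w))  # typically close to 0, i.e. near a minimum
```

Despite the many local minima and the injected noise, the iterates usually settle near one of the minima; understanding why this happens for real network losses is the open question.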
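For the overparametrization question, a minimal sketch of an interpolating model: random-feature regression with ten times more parameters than training points. Classical parameter counting predicts such a model should generalize poorly; the sketch simply fits the minimum-norm solution and prints training and test error for inspection. The feature map, target function, and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x, W, b):
    # Random ReLU features: phi(x) = max(0, x*W + b), giving 200 parameters.
    return np.maximum(0.0, np.outer(x, W) + b)

W, b = rng.normal(size=200), rng.normal(size=200)
x_train = rng.uniform(-3, 3, size=20)     # only 20 training points
y_train = np.sin(x_train)
x_test = rng.uniform(-3, 3, size=500)
y_test = np.sin(x_test)

# np.linalg.lstsq returns the minimum-norm solution for this
# underdetermined (overparametrized) system, so the fit interpolates.
coef, *_ = np.linalg.lstsq(features(x_train, W, b), y_train, rcond=None)

train_err = np.mean((features(x_train, W, b) @ coef - y_train)**2)
test_err = np.mean((features(x_test, W, b) @ coef - y_test)**2)
print("train MSE:", train_err, "test MSE:", test_err)
```

The training error is essentially zero because the model interpolates; the interesting quantity is the test error, which parameter-counting arguments alone cannot predict.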
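For the generative-models question, a minimal GAN sketch in PyTorch on a one-dimensional two-mode Gaussian mixture: a generator and discriminator are trained adversarially, and the spread of the generated samples gives a crude signal of whether the generator has collapsed onto a single mode. All architecture and hyperparameter choices here are illustrative assumptions, not the methods discussed in Arora's talk.

```python
import torch
import torch.nn as nn

def real_batch(n=128):
    # Real data: two modes at -2 and +2; a collapsed generator covers only one.
    modes = torch.randint(0, 2, (n, 1)).float() * 4.0 - 2.0
    return modes + 0.2 * torch.randn(n, 1)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator update: real samples labelled 1, generated samples labelled 0.
    x_real = real_batch()
    x_fake = G(torch.randn(128, 8)).detach()
    loss_d = bce(D(x_real), torch.ones(128, 1)) + bce(D(x_fake), torch.zeros(128, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator update: try to make the discriminator label fakes as real.
    x_fake = G(torch.randn(128, 8))
    loss_g = bce(D(x_fake), torch.ones(128, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

samples = G(torch.randn(1000, 8)).detach()
print("generated mean/std:", samples.mean().item(), samples.std().item())
# A standard deviation far below that of the real mixture (roughly 2) suggests
# the generator has collapsed onto a single mode.
```

The open theoretical question is precisely why such an unstable-looking two-player training loop works as well as it does in practice, and which properties would guarantee stability or rule out mode collapse.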
The Expert