Hypernetworks

 
[http://www.google.com/search?q=hyperparameter+optimization+deep+machine+learning+ML+ai ...Google search]
 
* [[Gradient Descent Optimization & Challenges]]
* [[Algorithm Administration]]
* [http://www.quantamagazine.org/researchers-build-ai-that-builds-ai-20220125/ Researchers Build AI That Builds AI] By using hypernetworks, researchers can now preemptively fine-tune artificial neural networks, saving some of the time and expense of training.

A hypernetwork is a network that generates the weights of another network (Ha et al., 2017). In multi-task fine-tuning, the hypernetwork captures the information shared across tasks, while the task-conditional adapters and layer normalization parameters it generates allow the model to adapt to each individual task, reducing negative task interference. - Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks, R.K. Mahabadi, S. Ruder, M. Dehghani, & J. Henderson
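
To make the weight-generation idea concrete, here is a minimal sketch in PyTorch. The framework choice, class names, and layer sizes are illustrative assumptions, not the setup used by Ha et al. or Mahabadi et al.: a small MLP hypernetwork, conditioned on a learned embedding, emits the weight matrix and bias of a linear target layer.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperLinear(nn.Module):
    """Linear layer whose weights are produced by a hypernetwork.
    Illustrative sketch: a small MLP maps a learned embedding
    (e.g. one per task or per layer) to the flattened weights."""

    def __init__(self, in_features, out_features, embed_dim=8, hidden=64):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        # Learned embedding that conditions the hypernetwork.
        self.embedding = nn.Parameter(torch.randn(embed_dim))
        # The hypernetwork: embedding -> flattened weight matrix + bias.
        n_out = out_features * in_features + out_features
        self.hyper = nn.Sequential(
            nn.Linear(embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_out),
        )

    def forward(self, x):
        # Generate the target layer's parameters on the fly.
        params = self.hyper(self.embedding)
        split = self.out_features * self.in_features
        weight = params[:split].view(self.out_features, self.in_features)
        bias = params[split:]
        return F.linear(x, weight, bias)

# Usage: gradients flow through the generated weights into the
# hypernetwork, so training the target network trains the hypernetwork.
layer = HyperLinear(in_features=16, out_features=4)
x = torch.randn(32, 16)
loss = layer(x).pow(2).mean()
loss.backward()  # gradients accumulate on self.hyper and self.embedding

Because the target layer's parameters are a differentiable function of the hypernetwork's output, only the hypernetwork and its conditioning embedding hold trainable parameters; swapping in a different embedding per task reuses the same hypernetwork across tasks, which is the parameter-sharing idea behind the multi-task fine-tuning paper cited above.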