[https://www.youtube.com/results?search_query=hyperparameter+deep+learning+tuning+optimization+ai YouTube search...]
[https://www.google.com/search?q=hyperparameter+optimization+deep+machine+learning+ML+ai ...Google search]

* [[Algorithm Administration]]
* [https://www.quantamagazine.org/researchers-build-ai-that-builds-ai-20220125/ Researchers Build AI That Builds AI] By using hypernetworks, researchers can now preemptively fine-tune artificial neural networks, saving some of the time and expense of training
A hypernetwork is a network that generates the weights of another network (Ha et al., 2017). The hypernetwork captures information shared across tasks, while the generated task-conditional adapters and layer normalization allow the model to adapt to each individual task, reducing negative task interference. [https://aclanthology.org/2021.acl-long.47.pdf Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks R.K. Mahabadi, S. Ruder, M. Dehghani, & J. Henderson]
<youtube>KY9DoutzH6k</youtube>
<youtube>k9RURcGL_mg</youtube>
Revision as of 19:01, 28 March 2023
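The core idea above — one network emitting the weights of another, with a per-task embedding selecting the generated weights — can be sketched minimally in NumPy. This is an illustrative toy, not the architecture from the cited paper: the names (`H`, `target_weights`, `target_forward`) and the single-linear-layer hypernetwork are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target layer: a linear map with 4 inputs and 3 outputs (12 weights).
# Instead of learning these weights directly, a hypernetwork produces
# them from a small task embedding.
n_in, n_out = 4, 3
embed_dim = 2

# Hypernetwork (toy): a single linear layer mapping a task embedding
# to the flattened weight matrix of the target layer. H is the only
# trainable parameter shared by all tasks.
H = rng.normal(scale=0.1, size=(embed_dim, n_in * n_out))

def target_weights(task_embedding):
    """Generate the target layer's weight matrix from a task embedding."""
    return (task_embedding @ H).reshape(n_in, n_out)

def target_forward(x, task_embedding):
    """Run the target layer using hypernetwork-generated weights."""
    return x @ target_weights(task_embedding)

# Two task embeddings yield two different target layers, yet both
# are produced by the same shared hypernetwork parameters H.
task_a, task_b = rng.normal(size=(2, embed_dim))
x = rng.normal(size=(1, n_in))
y_a = target_forward(x, task_a)  # shape (1, 3)
y_b = target_forward(x, task_b)  # shape (1, 3)
```

Training such a model updates `H` (and the task embeddings) by backpropagating through the generated weights, so shared structure lives in the hypernetwork while per-task behavior comes from the embeddings.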