PRIMO.ai
Revision as of 21:08, 22 May 2020
On Wednesday, April 17, 2024, PRIMO.ai has 736 pages.
Contents
- 1 Getting Started
- 2 Information Analysis
- 3 Algorithms
- 3.1 Predict values - Regression
- 3.2 Classification ...predict categories
- 3.3 Recommendation
- 3.4 Clustering - Continuous - Dimensional Reduction
- 3.5 Convolutional
- 3.6 Graph
- 3.7 Sequence / Time
- 3.8 Competitive
- 3.9 Semi-Supervised
- 3.10 Natural Language
- 3.11 Reinforcement Learning (RL)
- 3.12 Neuro-Symbolic
- 3.13 Other
- 4 Techniques
- 5 Development & Implementation
Getting Started
Overview
Background
AI Breakthroughs
AI Fun
- Google AI Experiments
- TensorFlow Playground
- TensorFlow.js Demos
- Google AIY Projects Program - Do-it-yourself artificial intelligence
- NVIDIA Playground
Try GPT-2...
- Talk to Transformer - completes your text | Adam D King, Hugging Face, and OpenAI
- AI Dungeon 2 - an AI-generated text adventure
... more Natural Language Processing (NLP) fun...
- CoreNLP - see NLP parsing techniques by pasting your text | Stanford
- Sentiment Treebank Analysis Demo
How to...
- AI Solver for determining possible algorithms for your needs
- Strategy & Tactics for developing applications
- Checklists for ensuring consistency and completeness
Forward Thinking
Information Analysis
- Framing Context
- Datasets & Benchmarks
- Imbalanced Data
- Data Preprocessing
- Data Augmentation, Data Labeling, and Auto-Tagging
- Feature Exploration/Learning
- Batch Norm(alization) & Standardization
- Hyperparameters
- Zero Padding
- Train, Validate, and Test
- Model Assessment:
- Visualization
- Master Data Management (MDM) / Feature Store / Data Lineage / Data Catalog
- Data Interoperability
Algorithms
Predict values - Regression
- Linear Regression
- Ridge Regression
- Lasso Regression
- Elastic Net Regression
- Bayesian Linear Regression
- Logistic Regression (LR)
- Support Vector Regression (SVR)
- Ordinal Regression
- Poisson Regression
- Tree-based...
- General Regression Neural Network (GRNN)
- One-class Support Vector Machine (SVM)
- Gradient Boosting Machine (GBM)
Classification ...predict categories
- Supervised
- Naive Bayes
- K-Nearest Neighbors (KNN)
- Perceptron (P) ...and Multi-layer Perceptron (MLP)
- Feed Forward Neural Network (FF or FFNN)
- Artificial Neural Network (ANN)
- Deep Learning - Deep Neural Network (DNN)
- Kernel Approximation - Kernel Trick
- Logistic Regression (LR)
- Softmax Regression; Multinomial Logistic Regression
- Tree-based...
- Apriori, Frequent Pattern (FP) Growth, Association Rules/Analysis
- Markov Model (Chain, Discrete Time, Continuous Time, Hidden)
- Unsupervised
- Radial Basis Function Network (RBFN)
- Self-Supervised
- Autoencoder (AE) / Encoder-Decoder
- (Stacked) Denoising Autoencoder (DAE)
- Sparse Autoencoder (SAE)
Recommendation
Clustering - Continuous - Dimensional Reduction
- Singular Value Decomposition (SVD)
- Principal Component Analysis (PCA)
- K-Means
- Fuzzy C-Means (FCM)
- K-Modes
- Association Rule Learning
- Mean-Shift Clustering
- Density-Based Spatial Clustering of Applications with Noise (DBSCAN)
- Expectation–Maximization (EM) Clustering using Gaussian Mixture Models (GMM)
- Restricted Boltzmann Machine (RBM)
- Variational Autoencoder (VAE)
- Biclustering
- Multidimensional Scaling (MDS)
Hierarchical
- Hierarchical Cluster Analysis (HCA)
- Hierarchical Clustering; Agglomerative (HAC) & Divisive (HDC)
- Hierarchical Temporal Memory (HTM) Time
- Mixture Models; Gaussian
Convolutional
Deconvolutional
Graph
- Includes social networks, sensor networks, the entire Internet, and 3D Objects (Point Cloud)
- Graph Convolutional Network (GCN), Graph Neural Networks (Graph Nets), Geometric Deep Learning
- Point Cloud
- A hierarchical RNN-based model to predict scene graphs for images
- A multi-granularity reasoning framework for social relation recognition
- Neural Structured Learning (NSL)
Sequence / Time
- Sequence to Sequence (Seq2Seq)
- End-to-End Speech
- Neural Turing Machine
- Recurrent Neural Network (RNN)
- (Tree) Recursive Neural (Tensor) Network (RNTN)
Time
- Temporal Difference (TD) Learning
- Predict values
Spatiotemporal
Spatial-Temporal Dynamic Network (STDN)
Competitive
- Generative Adversarial Network (GAN)
- Conditional Adversarial Architecture (CAA)
- Kohonen Network (KN)/Self Organizing Maps (SOM)
- Quantum Generative Adversarial Learning (QuGAN - QGAN)
Semi-Supervised
In many practical situations, labeling data is expensive because it requires skilled human experts. When labels are available for only a few of the observations, semi-supervised algorithms are the best candidates for model building. These methods exploit the idea that, even though the group memberships of the unlabeled data are unknown, the data still carries important information about the group parameters. Reference: Learning Techniques
- Semi-Supervised Learning with Generative Adversarial Network (SSL-GAN)
- Context-Conditional Generative Adversarial Network (CC-GAN)
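The idea above can be sketched with a toy self-training loop (pure NumPy; the function and data are invented for illustration, not taken from any of the linked pages): a handful of labeled points seed per-class centroids, the unlabeled points receive pseudo-labels from their nearest centroid, and the centroids are then re-estimated from labeled and pseudo-labeled data together.

```python
import numpy as np

def self_train_centroids(X_lab, y_lab, X_unlab, n_iter=5):
    """Toy self-training: unlabeled points get pseudo-labels from the
    nearest class centroid, then centroids are re-estimated using both
    labeled and pseudo-labeled data."""
    classes = np.unique(y_lab)
    # Initial centroids come from the few labeled observations only.
    centroids = np.array([X_lab[y_lab == c].mean(axis=0) for c in classes])
    for _ in range(n_iter):
        # Pseudo-label each unlabeled point by its nearest centroid.
        d = np.linalg.norm(X_unlab[:, None, :] - centroids[None, :, :], axis=2)
        pseudo = classes[d.argmin(axis=1)]
        # Re-estimate centroids from labeled + pseudo-labeled data.
        X_all = np.vstack([X_lab, X_unlab])
        y_all = np.concatenate([y_lab, pseudo])
        centroids = np.array([X_all[y_all == c].mean(axis=0) for c in classes])
    return centroids, pseudo

# Two clusters with only one labeled point each; the unlabeled mass
# still shapes the final centroids.
rng = np.random.default_rng(0)
X_unlab = np.vstack([rng.normal(0, 0.3, (50, 2)),
                     rng.normal(3, 0.3, (50, 2))])
X_lab = np.array([[0.0, 0.0], [3.0, 3.0]])
y_lab = np.array([0, 1])
centroids, pseudo = self_train_centroids(X_lab, y_lab, X_unlab)
```

This is the same intuition as the paragraph above: the unlabeled points do not reveal their class, but their geometry pulls the class parameters (here, centroids) toward the true cluster centers.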
Natural Language
- Natural Language Processing (NLP) involves speech recognition, (speech) translation, understanding complete sentences (semantic parsing), recognizing synonyms of matching words, and sentiment analysis
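As a toy illustration of the last of those tasks, here is a minimal lexicon-based sentiment scorer (the word lists and function are invented for the example; real systems such as the Stanford demo above use trained models, not fixed lexicons):

```python
# Toy lexicon-based sentiment analysis: score = (#positive - #negative)
# over the number of sentiment-bearing words found.
POSITIVE = {"good", "great", "excellent", "love", "wonderful"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "poor"}

def sentiment(text):
    """Return a crude polarity score for `text` in [-1, 1]."""
    words = text.lower().split()
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

sentiment("I love this wonderful library")   # positive -> 1.0
sentiment("terrible, awful experience")      # negative -> -1.0
```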
Reinforcement Learning (RL)
An algorithm receives a delayed reward at the next time step, which it uses to evaluate its previous action. Based on those rewards, the algorithm trains itself on the success or error of its output. In combination with neural networks, it is capable of solving more complex tasks. Policy Gradient (PG) methods are a family of reinforcement learning techniques that optimize parametrized policies with respect to the expected return (long-term cumulative reward) by gradient descent.
- Monte Carlo (MC) Method - Model Free Reinforcement Learning
- Markov Decision Process (MDP)
- Q Learning
- State-Action-Reward-State-Action (SARSA)
- Deep Reinforcement Learning (DRL) DeepRL
- Distributed Deep Reinforcement Learning (DDRL)
- Deep Q Network (DQN)
- Evolutionary Computation / Genetic Algorithms
- Actor Critic
- Hierarchical Reinforcement Learning (HRL)
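The policy-gradient idea described above can be sketched in a few lines with the REINFORCE estimator on a two-armed bandit (a made-up minimal setup, not the method of any specific page above): a softmax policy over action preferences is nudged uphill on expected reward using the score-function gradient.

```python
import numpy as np

def reinforce_bandit(probs_reward, steps=2000, lr=0.1, seed=0):
    """Minimal REINFORCE on a two-armed bandit: sample an action from a
    softmax policy, observe a delayed 0/1 reward, and follow the
    policy-gradient estimate  reward * grad log pi(action)."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(len(probs_reward))            # action preferences
    for _ in range(steps):
        pi = np.exp(theta) / np.exp(theta).sum()   # softmax policy
        a = rng.choice(len(theta), p=pi)           # sample an action
        r = float(rng.random() < probs_reward[a])  # delayed 0/1 reward
        # grad of log pi(a) w.r.t. theta is one_hot(a) - pi.
        grad = -pi
        grad[a] += 1.0
        theta += lr * r * grad
    return np.exp(theta) / np.exp(theta).sum()

# Arm 1 pays off more often, so the learned policy should prefer it.
pi = reinforce_bandit([0.2, 0.8])
```

The bandit has no state, so this isolates the core of the update; full policy-gradient methods apply the same estimator to trajectories through an environment, usually with a baseline to reduce variance.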
Neuro-Symbolic
The “connectionists” seek to construct artificial neural networks, inspired by biology, that learn about the world, while the “symbolists” seek to build intelligent machines by coding in logical rules and representations of the world. Neuro-symbolic AI combines the fruits of both groups.
Other
Techniques
- Math for Intelligence
- Arxiv Sanity Preserver to accelerate research
Methods & Concepts
- Backpropagation
- Overfitting Challenge
- Dimensional Reduction; identification - what influences an observed outcome
- Activation Functions
- Memory
- Memory Networks
- Attention Mechanism/Transformer Model
- Transformer-XL
- Multiclassifiers; Ensembles and Hybrids; Bagging, Boosting, and Stacking
- Optimizers
- Neural Network Pruning
- Repositories & Other Algorithms
- DAWNBench - An End-to-End Deep Learning Benchmark and Competition
- Knowledge Graphs
- Quantization
- Causation vs. Correlation
- Object Detection; Faster R-CNN, YOLO, SSD
- Deep Features
- Local Features
Learning Techniques
- Text Transfer Learning
- Image/Video Transfer Learning
- Few Shot Learning
- Transfer Learning - a model trained on one task is re-purposed on a second, related task
- Ensemble Learning
- Multi-Task Learning (MTL)
- Apprenticeship Learning - Inverse Reinforcement Learning (IRL)
- Imitation Learning
- Simulated Environment Learning
- Lifelong Learning - Catastrophic Forgetting Challenge
- Neural Structured Learning (NSL)
- Meta-Learning
Opportunities & Challenges
- Generative Modeling
- Inside Out - Curious Optimistic Reasoning
- Nature
- Connecting Brains
- Architectures
- Messaging & Routing
- Pipelines
- Federated Learning
- Distributed Learning
- Processing Units - CPU, GPU, APU, TPU, VPU, FPGA, QPU
- Integrity Forensics
- Other Challenges in Artificial Intelligence
Development & Implementation
- Building Your Environment
- Pipelines
- Service Capabilities
- AI Marketplace & Toolkit/Model Interoperability
No Coding
- Automated Machine Learning (AML) - AutoML
- Neural Architecture Search (NAS) Algorithm
- Other codeless options, Code Generators, Drag n' Drop
Coding
Libraries & Frameworks
TensorFlow
- TensorBoard
- TensorFlow Playground
- TensorFlow.js Demos
- TensorFlow.js
- TensorFlow Lite
- TensorFlow Serving
- Related...
Tooling
- Model Search
- Model Monitoring
- Notebooks; Jupyter and R Markdown
Platforms: Machine Learning as a Service (MLaaS)
... and other leading organizations
If you get a 502 or 503 error, please try the webpage again; your request is visiting the island where the server is located, and is perhaps relaxing in the sun before returning. Thank you.