Difference between revisions of "PRIMO.ai"
(34 intermediate revisions by the same user not shown)
Revision as of 22:41, 1 December 2020
As of Friday, March 29, 2024, PRIMO.ai has 733 pages
Contents
- 1 Getting Started
- 2 Information Analysis
- 3 Algorithms
- 3.1 Predict values - Regression
- 3.2 Classification ...predict categories
- 3.3 Recommendation
- 3.4 Clustering - Continuous - Dimensional Reduction
- 3.5 Convolutional
- 3.6 Graph
- 3.7 Sequence / Time
- 3.8 Competitive
- 3.9 Semi-Supervised
- 3.10 Natural Language
- 3.11 Reinforcement Learning (RL)
- 3.12 Neuro-Symbolic
- 3.13 Other
- 4 Techniques
- 5 Development & Implementation
Getting Started
Overview
Background
AI Breakthroughs
- Capabilities
- Case Studies
- Artificial Intelligence | United States Patent and Trademark Office ...AI patents after 2013
AI Fun
- Google AI Experiments
- TensorFlow Playground ...learn more
- TensorFlow.js Demos
- Google AIY Projects Program - Do-it-yourself artificial intelligence
- NVIDIA Playground
- Competitions
- Try GPT
- AI Dungeon 2 AI generated text adventure
...more Natural Language Processing (NLP) fun...
- CoreNLP - see NLP parsing techniques by pasting your text | Stanford
- Sentiment Treebank Analysis Demo
How to...
- AI Solver for determining possible algorithms for your needs
- Strategy & Tactics for developing AI investments
- AI Governance to reduce unnecessary risks and assure success
- Evaluation of AI investments
- Checklists for ensuring consistency and completeness
Forward Thinking
Information Analysis
- Framing Context
- Data Science
- Visualization
- Hyperparameters
- Evaluation
- Train, Validate, and Test
Algorithms
- Algorithms; the engines of AI
- Model Zoos
- Graphical Tools for Modeling AI Components
Predict values - Regression
- Linear Regression
- Ridge Regression
- Lasso Regression
- Elastic Net Regression
- Bayesian Linear Regression
- Bayesian Deep Learning (BDL)
- Logistic Regression (LR)
- Support Vector Regression (SVR)
- Ordinal Regression
- Poisson Regression
- Tree-based...
- General Regression Neural Network (GRNN)
- One-class Support Vector Machine (SVM)
- Gradient Boosting Machine (GBM)
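The family above shares one goal: fit a function that predicts a continuous value. As a minimal illustration (not tied to any particular page here), ordinary Linear Regression can be fit in closed form via the least-squares normal equations; the data below is synthetic.

```python
import numpy as np

# Synthetic data: y = 3x + 2 plus a little noise
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.05, size=200)

# Append a bias column and solve the least-squares problem
Xb = np.hstack([X, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

slope, intercept = w   # recovers roughly 3.0 and 2.0
```

Ridge, Lasso, and Elastic Net modify only the objective (adding L2 and/or L1 penalties on `w`); the fitting machinery stays recognizably the same.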
Classification ...predict categories
- Supervised
- Naive Bayes
- K-Nearest Neighbors (KNN)
- Perceptron (P) ...and Multi-layer Perceptron (MLP)
- Feed Forward Neural Network (FF or FFNN)
- Artificial Neural Network (ANN)
- Deep Learning - Deep Neural Network (DNN)
- Kernel Approximation - Kernel Trick
- Logistic Regression (LR)
- Softmax Regression; Multinominal Logistic Regression
- Tree-based...
- Apriori, Frequent Pattern (FP) Growth, Association Rules/Analysis
- Markov Model (Chain, Discrete Time, Continuous Time, Hidden)
- Unsupervised
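To make supervised classification concrete, here is a minimal K-Nearest Neighbors (KNN) classifier on synthetic 2-D points; the data and the choice of k are arbitrary for the sketch.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    # Vote among the k training points closest to x (Euclidean distance)
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return int(np.bincount(nearest).argmax())

# Two synthetic classes centered at (0, 0) and (3, 3)
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 0.5, (30, 2)), rng.normal(3, 0.5, (30, 2))])
y_train = np.array([0] * 30 + [1] * 30)

print(knn_predict(X_train, y_train, np.array([0.1, 0.2])))   # 0
print(knn_predict(X_train, y_train, np.array([2.9, 3.1])))   # 1
```

KNN needs no training phase at all; the "model" is the labeled data itself, which is why it appears here as the simplest supervised baseline.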
Recommendation
Clustering - Continuous - Dimensional Reduction
- Singular Value Decomposition (SVD)
- Principal Component Analysis (PCA)
- K-Means
- Fuzzy C-Means (FCM)
- K-Modes
- Association Rule Learning
- Mean-Shift Clustering
- Density-Based Spatial Clustering of Applications with Noise (DBSCAN)
- Expectation–Maximization (EM) Clustering using Gaussian Mixture Models (GMM)
- Restricted Boltzmann Machine (RBM)
- Variational Autoencoder (VAE)
- Biclustering
- Multidimensional Scaling (MDS)
Hierarchical
- Hierarchical Cluster Analysis (HCA)
- Hierarchical Clustering; Agglomerative (HAC) & Divisive (HDC)
- Hierarchical Temporal Memory (HTM) Time
- Mixture Models; Gaussian
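Of the clustering methods above, K-Means is the easiest to see end to end: alternate an assignment step and a mean-update step until the centers stop moving. The blobs below are synthetic, and the stopping rule is simplified to a fixed iteration count.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center moves to the mean of its points
        # (a center with no points keeps its old position)
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels, centers

# Two well-separated synthetic blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
labels, centers = kmeans(X, k=2)
```

Expectation-Maximization with Gaussian Mixture Models generalizes exactly this loop: soft assignments instead of hard ones, and covariances as well as means in the update step.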
Convolutional
Deconvolutional
Graph
- includes social networks, sensor networks, the entire Internet, 3D Objects (Point Cloud)
- Graph Convolutional Network (GCN), Graph Neural Networks (Graph Nets), Geometric Deep Learning
- Point Cloud
- A hierarchical RNN-based model to predict scene graphs for images
- A multi-granularity reasoning framework for social relation recognition
- Neural Structured Learning (NSL)
Sequence / Time
- Transformer
- Generative Pre-trained Transformer (GPT)
- Attention Mechanism/Transformer Model
- Transformer-XL
- Sequence to Sequence (Seq2Seq)
- End-to-End Speech
- Neural Turing Machine
- Recurrent Neural Network (RNN)
- (Tree) Recursive Neural (Tensor) Network (RNTN)
Time
- Temporal Difference (TD) Learning
- Predict values
Spatiotemporal
Spatial-Temporal Dynamic Network (STDN)
Competitive
- Generative Adversarial Network (GAN)
- Image-to-Image Translation
- Conditional Adversarial Architecture (CAA)
- Kohonen Network (KN)/Self Organizing Maps (SOM)
- Quantum Generative Adversarial Learning (QuGAN - QGAN)
Semi-Supervised
In many practical situations, labeling is costly because it requires skilled human experts. When labels are absent for most observations but present for a few, semi-supervised algorithms are strong candidates for model building. These methods exploit the idea that, even though the group memberships of the unlabeled data are unknown, that data still carries important information about the group parameters. Reference: Learning Techniques
- Semi-Supervised Learning with Generative Adversarial Network (SSL-GAN)
- Context-Conditional Generative Adversarial Network (CC-GAN)
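The idea above can be sketched with self-training, one simple semi-supervised scheme: fit on the few labeled points, pseudo-label the most confident unlabeled points, and refit. This illustrative sketch uses a nearest-centroid classifier on synthetic data; the confidence proxy (distance margin) and keep rule are choices made for the example, not a reference method.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two Gaussian classes; only two points per class carry labels
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(3, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
labeled = np.array([0, 1, 100, 101])
unlabeled = np.setdiff1d(np.arange(200), labeled)

Xl, yl = X[labeled], y[labeled]
for _ in range(5):                                 # self-training rounds
    # Nearest-centroid "model" fit on the current labeled pool
    centroids = np.array([Xl[yl == c].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(X[unlabeled][:, None] - centroids[None], axis=2)
    pseudo = d.argmin(axis=1)
    conf = np.abs(d[:, 0] - d[:, 1])               # margin as confidence proxy
    keep = conf > np.median(conf)                  # adopt the most confident half
    Xl = np.vstack([Xl, X[unlabeled][keep]])
    yl = np.concatenate([yl, pseudo[keep]])
    unlabeled = unlabeled[~keep]

centroids = np.array([Xl[yl == c].mean(axis=0) for c in (0, 1)])
pred = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
accuracy = (pred == y).mean()
```

With only four labels, the unlabeled points still reveal where the two groups sit, which is exactly the information the paragraph above says semi-supervised methods exploit.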
Natural Language
- Natural Language Processing (NLP) involves speech recognition, speech translation, understanding complete sentences (semantic parsing), understanding synonyms of matching words, and sentiment analysis
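One of the simplest of these tasks, sentiment analysis, can be sketched with a tiny hand-made lexicon. The word list and scores below are invented for illustration; real systems use learned models rather than fixed dictionaries.

```python
# A toy lexicon-based sentiment scorer (illustrative only)
LEXICON = {"good": 1, "great": 2, "love": 2, "bad": -1, "awful": -2, "hate": -2}

def sentiment(text):
    # Crude tokenization: lowercase, strip simple punctuation, split on spaces
    words = text.lower().replace(".", " ").replace(",", " ").split()
    score = sum(LEXICON.get(w, 0) for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great movie."))   # positive
print(sentiment("An awful, bad plot."))        # negative
```

Even this crude scorer shows the shape of the task: map raw text to tokens, tokens to evidence, evidence to a decision.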
Reinforcement Learning (RL)
An algorithm receives a delayed reward at the next time step and uses it to evaluate its previous action; based on those decisions, the algorithm trains itself on the success or error of its output. In combination with neural networks it can solve more complex tasks. Policy Gradient (PG) methods are reinforcement learning techniques that optimize parametrized policies with respect to the expected return (long-term cumulative reward) by gradient descent.
- Monte Carlo (MC) Method - Model Free Reinforcement Learning
- Markov Decision Process (MDP)
- State-Action-Reward-State-Action (SARSA)
- Q Learning
- Deep Reinforcement Learning (DRL) / DeepRL
- Distributed Deep Reinforcement Learning (DDRL)
- Evolutionary Computation / Genetic Algorithms
- Actor Critic
- Hierarchical Reinforcement Learning (HRL)
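As a concrete illustration of the value-based methods above, here is tabular Q Learning on a hypothetical five-state corridor in which only reaching the rightmost state pays a reward. The environment, reward, and all parameters are invented for this sketch; it uses a fully random behavior policy, which off-policy Q-learning permits.

```python
import numpy as np

n_states = 5                          # states 0..4; state 4 is terminal
alpha, gamma = 0.5, 0.9               # learning rate and discount factor
rng = np.random.default_rng(0)
Q = np.zeros((n_states, 2))           # action 0 = left, action 1 = right

def step(s, a):
    # Deterministic corridor: entering the last state gives reward 1
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r

for _ in range(300):                  # episodes
    s = int(rng.integers(n_states - 1))   # random non-terminal start state
    for _ in range(20):                   # bounded episode length
        a = int(rng.integers(2))          # random exploratory behavior policy
        s2, r = step(s, a)
        # Q-learning update: bootstrap from the best action in the next state
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if s == n_states - 1:
            break

policy = Q.argmax(axis=1)             # greedy policy extracted from Q
```

After training, the greedy policy in every non-terminal state is "right", since each step right moves toward the only reward; the learned values fall off geometrically with distance from it (roughly 1, 0.9, 0.81, 0.73).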
Neuro-Symbolic
The “connectionists” seek to construct artificial neural networks, inspired by biology, to learn about the world, while the “symbolists” seek to build intelligent machines by coding in logical rules and representations of the world. Neuro-symbolic AI combines the fruits of both groups.
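A toy way to see the combination: let a learned (here, mocked) neural scorer propose candidates, and let hand-coded symbolic rules veto inconsistent ones. Everything below — the labels, scores, facts, and rule — is invented for illustration.

```python
# A mocked "neural" perception step: label -> confidence for one scene
neural_scores = {"cat": 0.70, "dog": 0.25, "car": 0.05}

# Symbolic knowledge: known facts plus a rule the answer must satisfy
facts = {"indoors": True}

def consistent(label):
    # Toy world-knowledge rule: a car cannot appear in an indoor scene
    if label == "car" and facts["indoors"]:
        return False
    return True

# Neuro-symbolic decision: highest-scoring label that survives the rules
candidates = [label for label in neural_scores if consistent(label)]
answer = max(candidates, key=neural_scores.get)
```

The neural part supplies graded evidence; the symbolic part supplies hard constraints. Real neuro-symbolic systems integrate the two far more deeply, but the division of labor is the same.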
Other
Techniques
- Math for Intelligence
- Arxiv Sanity Preserver to accelerate research
Methods & Concepts
- Backpropagation
- Stochastic Gradient Descent
- Learning Rate Decay
- Max Pooling
- Batch Normalization
- Overfitting Challenge
- Manifold Hypothesis and Dimensional Reduction; identification - what influences an observed outcome
- Activation Functions
- Memory Networks
- Multiclassifiers; Ensembles and Hybrids; Bagging, Boosting, and Stacking
- Optimizers
- Neural Network Pruning
- Repositories & Other Algorithms
- DAWNBench An End-to-End Deep Learning Benchmark and Competition
- Knowledge Graphs
- Quantization
- Train, Validate, and Test
- Causation vs. Correlation
- Image Retrieval / Object Detection; Faster Region-based Convolutional Neural Networks (R-CNN), You Only Look Once (YOLO), Single Shot Detector (SSD)
- Deep Features
- Local Features
- Unintended Feedback Loop
- Backtesting
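Several of the entries above compose naturally: Stochastic Gradient Descent paired with Learning Rate Decay. A minimal sketch, minimizing a least-squares loss on synthetic data with an inverse-time decay schedule (the data, schedule, and constants are all invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(0, 0.01, size=500)   # targets with small noise

w = np.zeros(3)
lr0, decay = 0.1, 0.01
for step in range(2000):
    i = rng.integers(500)                 # one random sample per step: "stochastic"
    grad = (X[i] @ w - y[i]) * X[i]       # gradient of 0.5 * (x.w - y)^2
    lr = lr0 / (1 + decay * step)         # inverse-time learning rate decay
    w -= lr * grad
```

The decaying step size lets early updates move fast while late updates average out the sampling noise, so `w` settles close to `true_w`.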
Learning Techniques
- Supervised Learning
- Unsupervised Learning
- Reinforcement Learning (RL)
- Semi-Supervised Learning
- Self-Supervised Learning
- Deep Learning
- Transfer Learning ...a model trained on one task is re-purposed on a second, related task
- Few Shot Learning
- Ensemble Learning
- Multi-Task Learning (MTL)
- Apprenticeship Learning - Inverse Reinforcement Learning (IRL)
- Imitation Learning
- Simulated Environment Learning
- Lifelong Learning - Catastrophic Forgetting Challenge
- Neural Structured Learning (NSL)
- Meta-Learning
- Online Learning
- Human-in-the-Loop (HITL) Learning / Active Learning
- Decentralized: Federated & Distributed Learning
Opportunities & Challenges
- Generative Modeling
- Inside Out - Curious Optimistic Reasoning
- Nature
- Connecting Brains
- Architectures
- Integrity Forensics
- Metaverse
- Other Challenges in Artificial Intelligence
- Quantum
Development & Implementation
- Building Your Environment
- Algorithm Administration
- Service Capabilities
- AI Marketplace & Toolkit/Model Interoperability
- Evaluating an AI investment
No Coding
- Automated Learning
- Neural Architecture Search (NAS) Algorithm
- Other codeless options, Code Generators, Drag n' Drop
Coding
Libraries & Frameworks
TensorFlow
- TensorBoard
- TensorFlow Playground
- TensorFlow.js Demos
- TensorFlow.js
- TensorFlow Lite
- TensorFlow Serving
- Related...
Tooling
- Model Search
- Model Monitoring
- Notebooks; Jupyter and R Markdown
Platforms: AI/Machine Learning as a Service (AIaaS/MLaaS)
... and other leading organizations
- Allen Institute for Artificial Intelligence, or AI2
- OpenAI
- NIST
- Stanford University, MIT, UC Berkeley, Carnegie Mellon University, Princeton University, University of Oxford, University of Texas Austin, UCLA, Duke University, EPFL, Harvard University, Cornell University, ETH, Tsinghua University, National University of Singapore, University of Pennsylvania, Technion, University of Washington, UC San Diego, University of Maryland, Peking University, Georgia Institute of Technology, University of Illinois at Urbana-Champaign, University of Wisconsin Madison, University of Toronto, Université de Montréal - Mila, KAIST, Texas A&M University, RIKEN, University of Cambridge, Columbia University, UMass Amherst, National Institute for Research in Digital Science and Technology (INRIA), New York University, University College London, University of Southern California, Yale University, Yandex, Shanghai Jiao Tong University, University of Minnesota, University of Chicago, McGill University, Seoul National University, University of Tuebingen, University of Alberta, Rice University, Johns Hopkins University
If you get a 502 or 503 error, please try the webpage again; your message is visiting the island on which the server is located, and is perhaps relaxing in the sun before returning. Thank you.