* [[Google AIY Projects Program]]  - Do-it-yourself artificial intelligence
* [http://www.nvidia.com/en-us/research/ai-playground/ NVIDIA Playground]
* [http://talktotransformer.com/ Try GPT-2...Talk to Transformer] - completes your text (see the sketch after this list). | [http://adamdking.com/ Adam D King], [http://huggingface.co/ Hugging Face] and [http://openai.com/ OpenAI]
* [[Competitions]]
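A minimal sketch of the "completes your text" idea above, using the [http://huggingface.co/ Hugging Face] <code>transformers</code> library rather than the Talk to Transformer site itself (the package install, model name, and prompt are assumptions for illustration):

<pre>
# Minimal GPT-2 text-completion sketch (assumes: pip install transformers torch).
from transformers import pipeline

# Load the public "gpt2" checkpoint into a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# Complete a prompt; max_length counts prompt tokens plus generated tokens.
prompt = "Artificial intelligence will change"
result = generator(prompt, max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])
</pre>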
  
 
=== How to... ===
*[[AI Solver]] for determining possible algorithms for your needs
*[[Strategy & Tactics]] for developing applications
*[[Checklists]] for ensuring consistency and completeness
  
 
=== Forward Thinking ===
 
* [[Framing Context]]
* [[Datasets]]
* [[Imbalanced Data]]
* [[Data Preprocessing]]
* [[Data Augmentation]], Data Labeling, and Auto-Tagging
* [[Feature Exploration/Learning]]
* [[Batch Norm(alization) & Standardization]]
* [[Hyperparameter]]s
* [[Zero Padding]]
* [[Train, Validate, and Test]] - see the sketch after this list
* Model Assessment:
** [http://www.kdnuggets.com/2018/04/right-metric-evaluating-machine-learning-models-1.html Choosing the Right Metric for Evaluating Machine Learning Models]
** [[Evaluation Measures - Classification Performance]]
**  [[Explainable Artificial Intelligence (XAI)]]
* [[Visualization]]
* [[Master Data Management  (MDM) / Feature Store / Data Lineage / Data Catalog]]
* [[Data Interoperability]]
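The [[Train, Validate, and Test]], [[Batch Norm(alization) & Standardization]], and Model Assessment items above fit together in one workflow. Here is a minimal sketch with scikit-learn (an assumed library choice; the dataset, model, and split ratios are illustrative): split the data, standardize using statistics from the training split only, fit a model, and report held-out metrics.

<pre>
# Minimal train/validate/test + standardization + evaluation sketch (assumes scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix

X, y = load_breast_cancer(return_X_y=True)

# Hold out a test set, then carve a validation set out of the remaining data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

# Standardize with statistics learned from the training split only (no leakage into test).
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_val = scaler.transform(X_val)
X_test = scaler.transform(X_test)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
print("test F1:", f1_score(y_test, model.predict(X_test)))
print(confusion_matrix(y_test, model.predict(X_test)))
</pre>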
  
 
= [[Algorithms]] =
* [[Model Zoos]]

== Predict values - [[Regression]] ==
*** [[Decision Jungle]]
** [[Apriori, Frequent Pattern (FP) Growth, Association Rules/Analysis]]
** [[Markov Model (Chain, Discrete Time, Continuous Time, Hidden)]]
* [[Unsupervised]]
** [[Radial Basis Function Network (RBFN)]]
== Graph ==
- includes social networks, sensor networks, the Internet itself, and 3D objects ([[Point Cloud]]s)
* [[Graph Convolutional Network (GCN), Graph Neural Networks (Graph Nets), Geometric Deep Learning]] - see the propagation sketch after this list
* [[Point Cloud]]
* [http://techxplore.com/news/2019-04-hierarchical-rnn-based-scene-graphs-images.html A hierarchical RNN-based model to predict scene graphs for images]
* [http://techxplore.com/news/2019-01-multi-granularity-framework-social-recognition.html A multi-granularity reasoning framework for social relation recognition]
* [[Neural Structured Learning (NSL)]]
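To make the graph-convolution idea above concrete, here is a minimal NumPy sketch of a single GCN propagation step, H' = ReLU(D<sup>-1/2</sup>(A+I)D<sup>-1/2</sup> H W), on a toy 4-node graph (the graph, feature sizes, and random weights are illustrative assumptions):

<pre>
# One graph-convolution (GCN) propagation step on a toy 4-node graph (assumes NumPy).
import numpy as np

A = np.array([[0, 1, 0, 0],   # adjacency matrix of a small undirected graph
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = np.random.rand(4, 3)      # node features: 4 nodes, 3 features each
W = np.random.rand(3, 2)      # weight matrix: 3 -> 2 features (random stand-in for learned weights)

A_hat = A + np.eye(4)                      # add self-loops
D_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization

H_next = np.maximum(0, A_norm @ H @ W)     # ReLU(A_norm H W): each node mixes its neighbors' features
print(H_next.shape)                        # (4, 2)
</pre>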
  
 
== Sequence / Time ==
* [[Sequence to Sequence (Seq2Seq)]]
* [[End-to-End Speech]]
* [[Neural Turing Machine]]
* [[Recurrent Neural Network (RNN)]]
* [[Natural Language Processing (NLP)]] involves speech recognition, (speech) translation, understanding (semantic parsing) of complete sentences, recognizing synonyms when matching words, and sentiment analysis (see the NLTK sketch after this list)
** Current State of the Art:
*** [[Attention Mechanism/Model - Transformer Model]]
**** [[Generative Pre-trained Transformer-2 (GPT-2)]] ..[http://talktotransformer.com/ Talk To Transformer]
**** [[Bidirectional Encoder Representations from Transformers (BERT)]]
**** [[XLNet]] extends [[Transformer-XL]]
*** [[(Deep) Convolutional Neural Network (DCNN/CNN)]]
*** [[Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Recurrent Neural Network (RNN)]]
** Methods:
*** [[Natural Language Processing (NLP)#Text Preprocessing |Text Preprocessing]]
**** [[Natural Language Processing (NLP)#Text Regular Expressions (Regex) |Regular Expressions (Regex)]]
**** [[Natural Language Processing (NLP)#Soundex |Soundex]]
**** [[Natural Language Processing (NLP)#Tokenization / Sentence Splitting |Tokenization / Sentence Splitting]]
***** [[Natural Language Processing (NLP)#Word Embeddings |Word Embeddings]]
**** [[Natural Language Processing (NLP)#Normalization |Normalization]]
***** [[Natural Language Processing (NLP)#Stemming (Morphological Similarity) |Stemming (Morphological Similarity)]]
***** [[Natural Language Processing (NLP)#Lemmatization |Lemmatization]]
**** [[Natural Language Processing (NLP)#Similarity |Similarity]]
***** [[Natural Language Processing (NLP)#Word Similarity |Word Similarity]]
***** [[Natural Language Processing (NLP)#Text Clustering |Text Clustering]]
***** [[Natural Language Processing (NLP)#Sentence/Document Similarity |Sentence/Document Similarity]]
***** [[Natural Language Processing (NLP)#Text Classification |Text Classification]]
***** [[Natural Language Processing (NLP)#Topic Modeling |Topic Modeling]]
**** [[Natural Language Processing (NLP)#Whole Word Masking |Whole Word Masking]]
**** [[Natural Language Processing (NLP)#Identity Scrubbing |Identity Scrubbing]]
**** [[Natural Language Processing (NLP)#Stop Words |Stop Words]]
*** [[Natural Language Processing (NLP)#Relating Text |Relating Text]]
**** [[Natural Language Processing (NLP)#Part-of-Speech (POS) Tagging |Part-of-Speech (POS) Tagging]]
**** [[Natural Language Processing (NLP)#Chunking |Chunking]] - grouping tokens into chunks/patterns, e.g. a telephone number
**** [[Natural Language Processing (NLP)#Chinking |Chinking]] - removing unwanted tokens from a chunk
**** [[Natural Language Processing (NLP)#Named Entity Recognition (NER) |Named Entity Recognition (NER)]]
**** [[Natural Language Processing (NLP)#Relation Extraction |Relation Extraction]]
**** [[Natural Language Processing (NLP)#Neural Coreference |Neural Coreference]]
** [[Natural Language Processing (NLP)#Natural Language Understanding (NLU) |Natural Language Understanding (NLU)]] or Natural Language Interpretation (NLI)
*** [[Natural Language Processing (NLP)#Managed Vocabularies |Managed Vocabularies]]
**** [[Natural Language Processing (NLP)#Corpora |Corpora]]
**** [[Natural Language Processing (NLP)#Ontologies |Ontologies]] and [[Natural Language Processing (NLP)#Taxonomies |Taxonomies]]
*** [[Natural Language Processing (NLP)#Natural Language Inference (NLI) and Recognizing Textual Entailment (RTE)|Natural Language Inference (NLI) and Recognizing Textual Entailment (RTE)]]
**** [[Natural Language Processing (NLP)#Semantic Role Labeling (SRL) |Semantic Role Labeling (SRL)]]
**** [[Natural Language Processing (NLP)#Sentiment Analysis |Sentiment Analysis]]
**** [[Natural Language Processing (NLP)#Wikifier |Wikifier]]
*** [[Natural Language Processing (NLP)#Workbench / Pipeline |Workbench / Pipeline]]
* [[Natural Language Generation (NLG)]] involves writing/generating complete grammatically correct sentences and paragraphs
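Several of the preprocessing and relating-text methods above (tokenization, stemming, lemmatization, POS tagging, chunking, NER) can be tried in a few lines with NLTK. This is only an illustrative sketch: it assumes NLTK is installed, resource names can differ slightly across NLTK versions, and the chunk grammar is a deliberately crude example of a pattern (here, a run of number tokens).

<pre>
# Minimal NLP methods sketch with NLTK (assumes: pip install nltk).
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

# Resource names may vary slightly between NLTK versions.
for pkg in ["punkt", "averaged_perceptron_tagger", "maxent_ne_chunker", "words", "wordnet"]:
    nltk.download(pkg, quiet=True)

text = "Barack Obama visited Paris and called 202 555 0175 on Tuesday."

tokens = nltk.word_tokenize(text)                      # Tokenization / sentence splitting
print(PorterStemmer().stem("visited"))                 # Stemming  -> 'visit'
print(WordNetLemmatizer().lemmatize("called", "v"))    # Lemmatization -> 'call'

tagged = nltk.pos_tag(tokens)                          # Part-of-Speech (POS) tagging
print(nltk.ne_chunk(tagged))                           # Named Entity Recognition (NER)

# Chunking: group POS-tagged tokens into patterns, e.g. a crude "phone number" chunk.
grammar = "PHONE: {<CD>+}"                             # one or more consecutive number tokens
print(nltk.RegexpParser(grammar).parse(tagged))
</pre>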
  
 
== [[Reinforcement Learning (RL)]] ==
An algorithm receives a delayed reward in the next time step to evaluate its previous action, and trains itself based on the success or error of its output; in combination with neural networks it can solve more complex tasks. [[Policy Gradient (PG)]] methods are a family of reinforcement learning techniques that optimize parametrized policies with respect to the expected return (long-term cumulative reward) by [[Gradient Descent Optimization & Challenges |gradient descent]]. (A minimal tabular Q-learning sketch follows the list below.)
* [[Monte Carlo]] (MC) Method - Model Free Reinforcement Learning
* [[Markov Decision Process (MDP)]]
* [[Q Learning]]
* [[State-Action-Reward-State-Action (SARSA)]]
* [[Deep Reinforcement Learning (DRL)]] - DeepRL
* [[Distributed Deep Reinforcement Learning (DDRL)]]
* [[Deep Q Network (DQN)]]
* [[Evolutionary Computation / Genetic Algorithms]]
* [[Actor Critic]]
* [[Deep Deterministic Policy Gradient (DDPG)]]
* [[Trust Region Policy Optimization (TRPO)]]
* [[Proximal Policy Optimization (PPO)]]
* [[Hierarchical Reinforcement Learning (HRL)]]
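As a concrete illustration of the delayed-reward idea and of tabular [[Q Learning]] from the list above, here is a toy sketch (the corridor environment, learning rate, discount factor, and exploration rate are all illustrative assumptions, not a reference implementation):

<pre>
# Minimal tabular Q-learning sketch: a 5-state corridor with a reward only at the right end.
import random

N_STATES, ACTIONS = 5, [0, 1]            # actions: 0 = step left, 1 = step right
alpha, gamma, epsilon = 0.1, 0.9, 0.2    # learning rate, discount factor, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(2000):                    # episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection.
        a = random.choice(ACTIONS) if random.random() < epsilon else max(ACTIONS, key=lambda x: Q[s][x])
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == N_STATES - 1 else 0.0       # delayed reward: only at the goal
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print([[round(q, 2) for q in row] for row in Q])         # stepping right should dominate
</pre>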
  
* [[Simulated Environment Learning]]
* [[Lifelong Learning]] - Catastrophic Forgetting Challenge
* [[Neural Structured Learning (NSL)]]
  
 
=== Opportunities & Challenges ===
** [[Capsule Networks (CapNets)]]
** [[Messaging & Routing]]
** [[Pipeline]]s
** [[Federated]]
** [[Processing Units - CPU, GPU, APU, TPU, VPU, FPGA, QPU]]
= Development & Implementation =
* [[Building Your Environment]]
* [[Pipeline]]s
* [[Service Capabilities]]
* [[AI Marketplace & Toolkit/Model Interoperability]]
* [[Intel]]
* [[Apple]]

=== ... and other leading organizations ===
* [http://allenai.org/ Allen Institute for Artificial Intelligence, or AI2]
* [http://openai.com/ OpenAI]
  
==== [http://machinelearning.apple.com/ Apple] ====
 
* [[Turi]]
 
  
  


In many practical situations, the cost to label is quite high, since it requires skilled human experts to do that. So, in the absence of labels in the majority of the observations but present in few, semi-supervised algorithms are the best candidates for the model building. These methods exploit the idea that even though the group memberships of the unlabeled data are unknown, this data carries important information about the group parameters.
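A minimal sketch of one common semi-supervised approach, self-training, with scikit-learn (the synthetic dataset, 5% labeling rate, and base model are illustrative assumptions): fit on the few labeled points, then absorb the model's own confident predictions on the unlabeled rest.

<pre>
# Minimal semi-supervised (self-training) sketch with scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Pretend labeling is expensive: keep labels for only ~5% of observations (-1 marks unlabeled).
rng = np.random.RandomState(0)
y_semi = np.where(rng.rand(len(y)) < 0.05, y, -1)

# Self-training: train on the labeled few, then add confident pseudo-labels from the unlabeled rest.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.9)
model.fit(X, y_semi)
print("accuracy against all true labels:", model.score(X, y))
</pre>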


== Neuro-Symbolic ==
The “connectionists” seek to construct artificial neural networks, inspired by biology, to learn about the world, while the “symbolists” seek to build intelligent machines by coding in logical rules and representations of the world. Neuro-symbolic approaches combine the fruits of both groups. (A toy sketch follows below.)
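A toy sketch of the idea (everything here is an illustrative assumption, not a reference implementation): a stand-in "neural" perception step produces soft label scores, and a symbolic rule layer applies logical knowledge on top of them.

<pre>
# Toy neuro-symbolic sketch: learned-style soft perception + symbolic rules on top.

def perceive(image_name):
    """Stand-in for a neural classifier: returns label probabilities for an 'image'."""
    fake_scores = {"cat.png": {"cat": 0.9, "dog": 0.1},
                   "dog.png": {"cat": 0.2, "dog": 0.8}}
    return fake_scores[image_name]

# Symbolic knowledge: simple logical rules over predicted labels.
rules = {"cat": "mammal", "dog": "mammal"}

def reason(image_name):
    probs = perceive(image_name)          # connectionist part: (pretend) learned perception
    label = max(probs, key=probs.get)     # most probable label
    return f"{image_name} is a {label}, therefore a {rules[label]}"   # symbolist part: rule application

print(reason("cat.png"))
print(reason("dog.png"))
</pre>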
