Class

https://courses.nvidia.com/dashboard

Jupyter notebooks: Shift+Enter to run a cell


Linguistic Concepts

  • coreference - anaphora
  • gang of four design
  • null subject
  • recursion

Word Embeddings

  • HMMs, CRFs, PGMs
    • CBoW - Bag of Words / n-grams - a feature per word / n-gram (see the sketch after this list)
    • 1-hot sparse input - create a vector the size of the entire vocabulary
  • Stop Words
  • TF-IDF
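A minimal sketch of the BoW / one-hot / TF-IDF featurizations above, assuming scikit-learn (the course notebooks may use different tooling):

    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

    docs = ["the market rallied today", "the market fell sharply today"]

    # Bag of words: one count feature per vocabulary word (sparse, vocabulary-sized)
    bow = CountVectorizer(stop_words="english")   # drops stop words such as "the"
    print(bow.fit_transform(docs).toarray())

    # TF-IDF: reweights counts so terms common to every document matter less
    tfidf = TfidfVectorizer(stop_words="english")
    print(tfidf.fit_transform(docs).toarray())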

Word2Vec

Skip-Gram
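A hedged sketch of skip-gram Word2Vec training, assuming the gensim library (not confirmed by these notes); sg=1 selects skip-gram, sg=0 would select CBoW:

    from gensim.models import Word2Vec

    sentences = [["stocks", "rallied", "today"], ["stocks", "fell", "today"]]

    # Tiny corpus, tiny vectors: purely illustrative hyperparameters
    model = Word2Vec(sentences, vector_size=16, window=2, min_count=1, sg=1)

    print(model.wv["stocks"])               # dense embedding for one word
    print(model.wv.most_similar("stocks"))  # neighbors by cosine similarity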


  • Firth 1957 - Distributional Hypothesis ("You shall know a word by the company it keeps")
  • Word Cloud
    • Text Classification
    • Text/Machine Translation (NMT)

Financial News

Yuval Mazor | NVIDIA

Tools (a loading sketch follows this list):

  • GloVe
    • dot product
  • FastText
    • Skip-gram
    • Continuous bag of words (CBoW)
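A sketch of pulling pretrained GloVe vectors into a Keras Embedding layer; the file name glove.6B.100d.txt and the toy word index are assumptions for illustration:

    import numpy as np
    from tensorflow.keras.layers import Embedding
    from tensorflow.keras.initializers import Constant

    # Parse the standard GloVe text format: a word followed by its vector
    glove = {}
    with open("glove.6B.100d.txt", encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            glove[parts[0]] = np.asarray(parts[1:], dtype="float32")

    # Align an embedding matrix with a tokenizer's word index (hypothetical here)
    word_index = {"stocks": 1, "rallied": 2}
    matrix = np.zeros((len(word_index) + 1, 100), dtype="float32")
    for word, i in word_index.items():
        if word in glove:
            matrix[i] = glove[word]

    # Frozen weights: the pretrained features are reused, not retrained
    layer = Embedding(input_dim=matrix.shape[0], output_dim=100,
                      embeddings_initializer=Constant(matrix), trainable=False)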

Multi-channel LSTM network, built with Keras on TensorFlow. Utilizing the GloVe and FastText Skip-gram pretrained embeddings allows the underlying network to access a larger feature space to build complex features on top of.

Can utilize combinations of various corpora and embedding methods for better performance.

A bidirectional LSTM network is used to encode sequential information on the embedding layers.

A dense layer projects the final output classification.

Using pretrained embeddings = transfer learning.
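A minimal single-channel skeleton of the model described above (layer sizes are illustrative assumptions); the actual multi-channel version would run parallel GloVe and FastText branches through the functional API and concatenate them:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense

    model = Sequential([
        # Pretrained GloVe/FastText weights would be loaded here, as sketched
        # earlier; frozen so the embeddings act as transfer learning
        Embedding(input_dim=20000, output_dim=100, trainable=False),
        # The Bi-LSTM encodes sequential information over the embedded tokens
        Bidirectional(LSTM(64)),
        # The dense layer projects the final output classification
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])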

Q: CNN vs. Bi-LSTM (RNN)? With this approach, the Bi-LSTM does not need a lot of data.

Attention mechanism -- in translation, the model can look back over the whole input rather than relying on a single fixed-size vector.

  • GloVe - https://nlp.stanford.edu/projects/glove/
  • Fasttext - https://fasttext.cc/
  • News articles per day - https://www.slideshare.net/chartbeat/mockup-infographicv4-27900399
  • News data source - https://github.com/philipperemy/financial-news-dataset
  • Word embeddings - https://www.analyticsvidhya.com/blog/2017/06/word-embeddings-count-word2veec/
  • Natural Language Processing - https://en.wikipedia.org/wiki/Natural-language_processing
  • Sentiment Analysis - https://en.wikipedia.org/wiki/Sentiment_analysis

Deep Autoencoders for Anomaly Detection

Variational Autoencoders

  • clustering - the latent layer may suggest the number of clusters
  • anomaly detection (see the sketch after this list)
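A hedged sketch of reconstruction-error anomaly detection with a deep autoencoder; the architecture, the random stand-in data, and the 99th-percentile threshold are all illustrative assumptions:

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    # Encoder compresses to a small latent layer; decoder reconstructs the input
    ae = Sequential([
        Dense(32, activation="relu", input_shape=(100,)),
        Dense(4, activation="relu"),           # latent layer
        Dense(32, activation="relu"),
        Dense(100, activation="linear"),
    ])
    ae.compile(optimizer="adam", loss="mse")

    x = np.random.rand(1000, 100).astype("float32")   # stand-in for real features
    ae.fit(x, x, epochs=5, batch_size=64, verbose=0)  # learn to reconstruct "normal"

    # Samples the model reconstructs poorly are anomaly candidates
    err = np.mean((ae.predict(x) - x) ** 2, axis=1)
    print(np.where(err > np.percentile(err, 99))[0])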

https://courses.nvidia.com/courses/course-v1:DLI+L-FI-06+V1/info

PCA or t-SNE
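A sketch of inspecting the latent space with PCA or t-SNE via scikit-learn; latent stands in for latent-layer activations extracted from the autoencoder:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.manifold import TSNE

    latent = np.random.rand(500, 4)   # stand-in for latent-layer activations

    xy_pca = PCA(n_components=2).fit_transform(latent)
    xy_tsne = TSNE(n_components=2, perplexity=30.0).fit_transform(latent)
    # Scatter-plotting either projection can reveal cluster structure, i.e. a
    # hint at how many clusters the latent layer has learned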


Statistical Arbitrage

arbitrage - currencies, stocks (the price is better than it should be - fair market value); how right, or how rich? Mean reversion: the autoencoder learns the fair market value, then feed in the current value.

reconstruction error is a signal - just one signal, consider a basket of signals
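A toy sketch of the "basket of signals" idea: standardize the reconstruction error alongside other signals and combine them (the names, data, and weights here are pure illustration):

    import numpy as np

    def zscore(s):
        return (s - s.mean()) / s.std()

    recon_error = np.random.rand(250)   # stand-in: per-day reconstruction error
    momentum = np.random.rand(250)      # stand-in: a second, unrelated signal

    # Equal weights are an arbitrary choice; the point is that reconstruction
    # error is only one input among several
    composite = 0.5 * zscore(recon_error) + 0.5 * zscore(momentum)
    print(composite[:5])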

backtesting - if we run this on historical events, how well does our algorithm work? (don't use training data!)

Using Pandas...

    # Inspect the first ten rows of the one-hot encoded dataset
    ori_dataset_categ_transformed.head(10)

    # Walk one row and report which one-hot columns are set
    for i, val in enumerate(list(ori_dataset_categ_transformed.iloc[1])):
        if val == 1:    # "is 1" tested identity, not equality; == is the correct check
            print("Got 1 at {}".format(i))