Difference between revisions of "Class"
Revision as of 14:25, 23 October 2018

https://courses.nvidia.com/dashboard

Linguistic Concepts

  • coreference - anaphora
  • gang of four design
  • null subject
  • recursion

Word Embeddings

  • HMMs, CRFs, PGMs
    • CBoW - Continuous Bag of Words / n-grams - a feature per word / n-gram
    • One-hot sparse input - create a vector the size of the entire vocabulary
  • Stop Words
  • TF-IDF
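The one-hot and bag-of-words ideas above can be sketched in a few lines of plain Python (the vocabulary and sentences are made-up examples, not from the course):

```python
def build_vocab(sentences):
    """Map each distinct word to an index."""
    vocab = {}
    for sent in sentences:
        for word in sent.split():
            vocab.setdefault(word, len(vocab))
    return vocab

def one_hot(word, vocab):
    """A sparse vector the size of the entire vocabulary: 1 at the word's index."""
    vec = [0] * len(vocab)
    vec[vocab[word]] = 1
    return vec

def bag_of_words(sentence, vocab):
    """Count of each vocabulary word in the sentence (BoW-style features)."""
    vec = [0] * len(vocab)
    for word in sentence.split():
        vec[vocab[word]] += 1
    return vec

sentences = ["the market rises", "the market falls"]
vocab = build_vocab(sentences)  # {'the': 0, 'market': 1, 'rises': 2, 'falls': 3}
print(one_hot("market", vocab))                  # [0, 1, 0, 0]
print(bag_of_words("the market falls", vocab))   # [1, 1, 0, 1]
```

Both representations grow with vocabulary size, which is what motivates the dense embeddings below.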

Word2Vec

Skip-Gram


  • Firth 1957 Distributional Hypothesis
  • Word Cloud
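Skip-gram operationalizes the distributional hypothesis (a word is characterized by the company it keeps) by pairing each center word with its context words. A minimal sketch of the pair generation, with an illustrative sentence and window size:

```python
def skipgram_pairs(tokens, window=2):
    """For each center word, pair it with every word within the context window."""
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

tokens = "you shall know a word".split()
print(skipgram_pairs(tokens, window=1))
# [('you', 'shall'), ('shall', 'you'), ('shall', 'know'), ('know', 'shall'),
#  ('know', 'a'), ('a', 'know'), ('a', 'word'), ('word', 'a')]
```

Word2Vec then trains a small network to predict the context word from the center word, and the learned weights become the embeddings.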

Text Classification

Text/Machine Translation (NMT)

Financial News

Yuval

Tools:

  • GloVe
    • dot product
  • FastText
    • Skipgram
    • Continuous bag of words
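The dot product noted under GloVe is how embeddings are compared; normalized, it is cosine similarity. A minimal numpy sketch (the 3-d vectors are toy values, not real GloVe embeddings):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: the dot product of the two normalized vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy embedding vectors for illustration only.
king, queen, banana = [0.9, 0.8, 0.1], [0.85, 0.9, 0.05], [0.1, 0.2, 0.9]
print(cosine(king, queen))   # near 1: similar contexts
print(cosine(king, banana))  # much smaller
```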

Multi-channel LSTM network in Keras with TensorFlow. Utilizing the GloVe and FastText Skipgram pretrained embeddings allows the underlying network to access a larger feature space to build complex features on top of.

Can utilize combinations of various corpora and embedding methods for better performance.

Bidirectional LSTM network is used to encode sequential information on the embedding layers.

Dense layer to project the final output classification.
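A rough numpy sketch of the multi-channel idea: two embedding tables (random stand-ins for the pretrained GloVe and FastText weights) are looked up and concatenated per token, mean pooling stands in for the Bi-LSTM encoder for brevity, and a dense layer projects to class probabilities. All sizes and values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d1, d2, n_classes, seq_len = 50, 8, 6, 3, 5

# Two embedding channels (random placeholders for pretrained GloVe / FastText).
glove_channel = rng.normal(size=(vocab_size, d1))
fasttext_channel = rng.normal(size=(vocab_size, d2))

# Dense projection to class scores.
W = rng.normal(size=(d1 + d2, n_classes))
b = np.zeros(n_classes)

def classify(token_ids):
    """Look up both channels, concatenate, pool over time, project, softmax."""
    feats = np.concatenate([glove_channel[token_ids],
                            fasttext_channel[token_ids]], axis=1)  # (seq, d1+d2)
    pooled = feats.mean(axis=0)   # mean pooling in place of the Bi-LSTM encoder
    logits = pooled @ W + b
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()        # probability per class

probs = classify(rng.integers(0, vocab_size, size=seq_len))
print(probs.shape, probs.sum())   # shape (3,), sums to ~1.0
```

In the actual Keras model each channel would be an Embedding layer initialized with the pretrained weights, and the pooled features would instead come from a Bidirectional LSTM.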

Using pretrained embeddings = transfer learning.

CNN vs Bi-LSTM (RNN)? With this approach, the Bi-LSTM does not need a lot of data.

Attention mechanism -- in translation, you can look back over the encoder states

                   ... not limited to a fixed vector size
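The "look back" idea can be sketched as dot-product attention: the decoder's query is scored against every encoder state, so the context is not squeezed into one fixed-size vector. Shapes and values here are illustrative:

```python
import numpy as np

def attend(query, encoder_states):
    """Weight encoder states by similarity to the query, then average them."""
    scores = encoder_states @ query           # one score per time step
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over time steps
    context = weights @ encoder_states        # weighted average of states
    return context, weights

states = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 encoder steps
context, weights = attend(np.array([1.0, 0.0]), states)
print(weights)   # highest weight on the steps most similar to the query
print(context)
```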

Deep Autoencoders for Anomaly Detection

Variational Autoencoders

  • clustering - latent layers may tell you the number of clusters
  • anomaly detection

https://courses.nvidia.com/courses/course-v1:DLI+L-FI-06+V1/info

PCA or t-SNE

Mean reversion / Statistical Arbitrage: arbitrage - money, stocks (the price differs from what it should be - the fair market value). How right, or how rich?

Autoencoder learns the fair market value; then feed in the current value.
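The reconstruction-error idea can be sketched with PCA acting as a linear autoencoder: fit a low-dimensional "fair value" model on normal data, then flag points whose reconstruction error is large. The data here is synthetic, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Normal data lies near a 1-d line in 2-d (a simple "fair value" structure).
t = rng.normal(size=200)
normal = np.column_stack([t, 2.0 * t]) + 0.05 * rng.normal(size=(200, 2))

# "Encoder": project onto the top principal component; "decoder": project back.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
pc = vt[0]                          # top principal direction

def reconstruction_error(x):
    centered = x - mean
    code = centered @ pc            # 1-d latent code
    recon = np.outer(code, pc)      # decode back to 2-d
    return np.linalg.norm(centered - recon, axis=1)

on_line = np.array([[1.0, 2.0]])    # consistent with the learned fair value
off_line = np.array([[1.0, -2.0]])  # anomalous: far from the learned structure
print(reconstruction_error(on_line), reconstruction_error(off_line))
```

A deep autoencoder replaces the linear projection with a nonlinear encoder/decoder, but the anomaly signal is the same: large reconstruction error means the input does not match the learned fair-value structure.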