Watch me Build a Cybersecurity Startup



Architecture

PREVIOUS: From the client's website to machine learning on AWS (backend):

  1. Customer's data
  2. Customer's credit card transaction data at the checkout page

PROPOSED: For GDPR privacy compliance, customer data is encrypted and machine learning is performed on the client's website; only an encrypted weights file is sent to the AWS backend pipeline, giving a scalable real-time federated learning solution.
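The proposed flow is essentially federated averaging: each client trains locally, only a weights file leaves the site, and the backend combines the clients' weights into a global model. Below is a minimal NumPy sketch of that idea; in the real pipeline the weights would additionally be encrypted (e.g. with OpenMined's secure multi-party computation tools), which is omitted here.

```python
import numpy as np

def local_train(X, y, w, lr=0.1, epochs=50):
    """One client's local logistic-regression training via gradient descent.
    Only the resulting weight vector ever leaves the client."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)       # gradient step
    return w

def federated_average(weight_list):
    """Server-side step: combine clients' weight files without seeing raw data."""
    return np.mean(weight_list, axis=0)

rng = np.random.default_rng(0)
n_features = 4
true_w = rng.normal(size=n_features)

# Two clients, each with a private local dataset that never leaves the site
client_weights = []
for _ in range(2):
    X = rng.normal(size=(200, n_features))
    y = (X @ true_w + rng.normal(scale=0.1, size=200) > 0).astype(float)
    client_weights.append(local_train(X, y, np.zeros(n_features)))

global_w = federated_average(client_weights)
print(global_w.shape)  # (4,)
```

This is only a sketch of the averaging step; a production setup would iterate (broadcast the global model back to clients and repeat) and encrypt the weight updates in transit.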


Transaction Processing

Client: A snippet on the client's website fires a Lambda function on the server ---


<script async src='https://www.our-aws-app.com/analytics.js'></script>
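On the server side, a handler receives the events that analytics.js fires. The sketch below is a hypothetical Python Lambda entry point for an API Gateway event; the field names and response shape are assumptions for illustration, not the actual app's API.

```python
import json

def lambda_handler(event, context):
    """Hypothetical AWS Lambda entry point triggered by analytics.js.
    Extracts the transaction payload from an API Gateway event."""
    body = json.loads(event.get("body", "{}"))
    record = {
        "site_id": body.get("site_id"),
        "amount": body.get("amount"),
        "timestamp": body.get("timestamp"),
    }
    # In the full pipeline this record would be handed to the backend
    # for the ML service to score; here we just echo it back.
    return {"statusCode": 200, "body": json.dumps(record)}

# Local smoke test with a fake API Gateway event
fake_event = {"body": json.dumps({"site_id": "abc", "amount": 12.5, "timestamp": 1})}
resp = lambda_handler(fake_event, None)
print(resp["statusCode"])  # 200
```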



Amazon AWS Backend: Pipeline ---


Machine Learning Service for Cybersecurity

PREVIOUS: Client-Server Model

PROPOSED: Federated Learning in a cloud


Analytics Dashboard

Client: Dashboard ---

Amazon AWS Backend ---

  • AWS tools:
    • Glue: a fully managed extract, transform, and load (ETL) service to prepare and load data for analytics
    • Crawlers: populate the AWS Glue Data Catalog with tables
    • Athena: an interactive query service to analyze data in Amazon S3 using standard SQL
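As an illustration of the Athena step, the helper below builds the kind of standard SQL a dashboard query might run over the S3 data. The database, table, and column names are hypothetical placeholders; the real schema would come from the Glue crawlers.

```python
# Hypothetical database/table/column names, standing in for the schema
# that the Glue crawlers would populate in the Data Catalog.
DATABASE = "fraud_analytics"
TABLE = "transactions"

def flagged_accounts_query(threshold=0.9):
    """Build the standard SQL that Athena would run over the S3 data
    (e.g. passed to boto3's athena start_query_execution)."""
    return (
        f"SELECT account_id, COUNT(*) AS flagged_txns "
        f"FROM {DATABASE}.{TABLE} "
        f"WHERE fraud_score >= {threshold} "
        f"GROUP BY account_id "
        f"ORDER BY flagged_txns DESC"
    )

print(flagged_accounts_query())
```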


DharmaSecurity

I've built a demo app called DharmaSecurity, a cybersecurity fraud detection tool for businesses. Once signed up, a business pastes a code snippet into their website and gets access to a dashboard that tells them how many fraudulent accounts they have. Our app uses machine learning to automatically remove suspected fraud accounts and flag likely ones for review. To build this, I use a suite of AWS tools, Python, JavaScript, a Logistic Regression (LR) model, a credit card fraud dataset, and a library called OpenMined to enable federated learning and secure multi-party computation. I've packed a lot into this video: animations, code, music, screencasts, skits, etc. Enjoy! Code for "a Cybersecurity Startup" | Siraj Raval - GitHub

Credit Card Fraud Detection | Nick Walker

Using under-sampling techniques and Logistic Regression (LR) to predict credit card fraud

This is the kernel submission for the Kaggle competition "Credit Card Fraud Detection". The dataset contains 28 Principal Component Analysis (PCA)-transformed features of credit card transactions made by European cardholders in September 2013. It covers transactions from two days, with 492 frauds out of 284,807 total transactions (0.172% of the total).

Because the dataset is highly unbalanced, I used a confusion matrix to calculate the precision and recall of my results. I also applied under-sampling, taking a smaller sample of the normal transactions and training a logistic regressor on it. I trained and applied the logistic regressor in three configurations: on all of the data, on only the under-sampled data, and trained on the under-sampled data but applied to all of the data. My recall scores for each were as follows:

  • The logistic regressor trained on and applied to all of the data: 0.52
  • The logistic regressor trained on and applied to only the under-sampled data: 0.91
  • The logistic regressor trained on the under-sampled data and applied to all of the data: 0.92

As the results above show, the logistic regressor trained on the under-sampled data and applied to all of the data performed best, with a 92% recall rate. A fairly good start for applying the under-sampling technique with only a logistic regressor.
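The under-sampling setup described above can be reproduced on synthetic data. The sketch below uses scikit-learn with an artificially imbalanced dataset standing in for the credit card data, so the exact recall numbers will differ from the ones reported above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the credit card data: ~0.5% positive class
X, y = make_classification(n_samples=20000, n_features=10, weights=[0.995],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Baseline: train on all of the (imbalanced) data
base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
recall_full = recall_score(y_te, base.predict(X_te))

# Under-sampling: keep all frauds, sample an equal number of normal txns
rng = np.random.default_rng(0)
pos = np.where(y_tr == 1)[0]
neg = rng.choice(np.where(y_tr == 0)[0], size=len(pos), replace=False)
idx = np.concatenate([pos, neg])
under = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
recall_under = recall_score(y_te, under.predict(X_te))

print(f"recall (trained on all data):          {recall_full:.2f}")
print(f"recall (trained on under-sampled data): {recall_under:.2f}")
```

Training on the balanced under-sample typically trades some precision (more false alarms) for the much higher recall the kernel reports.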

About the Dataset | Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson & Gianluca Bontempi

The dataset contains transactions made by credit cards in September 2013 by European cardholders. It presents transactions that occurred over two days, with 492 frauds out of 284,807 transactions. The dataset is highly unbalanced: the positive class (frauds) accounts for 0.172% of all transactions.

It contains only numerical input variables, which are the result of a Principal Component Analysis (PCA) transformation. Unfortunately, due to confidentiality issues, we cannot provide the original features or more background information about the data. Features V1, V2, ... V28 are the principal components obtained with PCA; the only features that have not been transformed with PCA are 'Time' and 'Amount'.

  • Feature 'Time' contains the seconds elapsed between each transaction and the first transaction in the dataset.
  • Feature 'Amount' is the transaction amount; this feature can be used for example-dependent cost-sensitive learning.
  • Feature 'Class' is the response variable; it takes value 1 in case of fraud and 0 otherwise.

Given the class imbalance ratio, we recommend measuring the accuracy using the Area Under the Precision-Recall Curve (AUPRC). Confusion Matrix accuracy is not meaningful for unbalanced classification.
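The recommendation above can be seen numerically: on a dataset with a ~0.2% positive rate, uninformative scores yield an AUPRC near the base rate, while scores that actually separate the classes score near 1.0, even though plain accuracy would look excellent for both. A small scikit-learn sketch on synthetic data (not the actual dataset):

```python
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)

# Imbalanced labels with a ~0.2% positive rate, echoing the 0.172% above
y_true = (rng.random(100_000) < 0.002).astype(int)

# Uninformative (random) scores: AUPRC collapses to roughly the base rate
ap_random = average_precision_score(y_true, rng.random(100_000))

# Scores that separate the classes: positives land in [0.8, 1.0],
# negatives in [0.0, 0.2], so the ranking is perfect and AUPRC is 1.0
informative = y_true * 0.8 + rng.random(100_000) * 0.2
ap_informed = average_precision_score(y_true, informative)

print(f"AUPRC (random scores):      {ap_random:.4f}")
print(f"AUPRC (informative scores): {ap_informed:.4f}")
```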

The dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available on http://mlg.ulb.ac.be/BruFence and http://mlg.ulb.ac.be/ARTML

Cite: Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson, and Gianluca Bontempi. "Calibrating Probability with Under-sampling for Unbalanced Classification." In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015.