Watch me Build a Cybersecurity Startup
- Risk, Compliance and Regulation ... Ethics ... Privacy ... Law ... AI Governance ... AI Verification and Validation
- Cybersecurity ... OSINT ... Frameworks ... References ... Offense ... NIST ... DHS ... Screening ... Law Enforcement ... Government ... Defense ... Lifecycle Integration ... Products ... Evaluating
- How do I leverage Artificial Intelligence (AI)?
Contents
Architecture
PREVIOUS: From client's website to machine learning on AWS (backend):
- Customer's Data
- Customer's Transaction Credit Card Data @ checkout page
PROPOSED: For GDPR privacy compliance, customer data is encrypted and machine learning is performed on the client's website; only an 'encrypted weights file' is sent to the AWS Backend Pipeline... for a scalable real-time federated learning solution.
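The proposed flow amounts to federated averaging: each client trains locally and ships only a weights file, and the backend aggregates them. A minimal pure-Python sketch under that assumption (the weight format and client count are hypothetical, not the actual app's protocol):

```python
def federated_average(client_weights):
    """Average weight vectors received from clients (equal-weight FedAvg).

    client_weights: list of equal-length lists of floats, one per client.
    Only these aggregated numbers, never raw customer data, reach the backend.
    """
    n_clients = len(client_weights)
    n_params = len(client_weights[0])
    return [sum(w[i] for w in client_weights) / n_clients
            for i in range(n_params)]

# Two hypothetical clients each send a 3-parameter encrypted-then-decrypted weights file.
global_weights = federated_average([[0.2, 0.4, 0.6], [0.4, 0.6, 0.8]])
```

A production version (e.g. via OpenMined/PySyft) would also weight clients by sample count and aggregate under encryption; this sketch only shows the averaging step.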
Transaction Processing
Client: Snippet on the client's website to fire a Lambda function on the server ---
- JavaScript
- OpenMined
- syft.js a client-side microlibrary for running PySyft operations in JavaScript
<script async src='https://www.our-aws-app.com/analytics.js'></script>
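On the server side, the snippet's request could land on an AWS Lambda function behind API Gateway. A hedged Python sketch of such a handler (the event fields and response shape are assumptions for illustration, not the actual app's API):

```python
import json

def lambda_handler(event, context):
    """Hypothetical Lambda handler for the analytics snippet's POST.

    With API Gateway proxy integration, the request body arrives as a JSON
    string in event["body"]; we validate it and acknowledge receipt.
    """
    try:
        payload = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}

    # Fields the client snippet might send; purely illustrative.
    record = {
        "page": payload.get("page", "unknown"),
        "session_id": payload.get("session_id"),
    }
    # In the real pipeline this record would be put onto Kinesis en route to S3.
    return {"statusCode": 200, "body": json.dumps({"received": record})}
```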
Amazon AWS Backend: Pipeline ---
- Building Open Source Google Analytics from Scratch | Pavel Tiunov - cube.js
- JavaScript
- NPM JavaScript package registry
- Yarn package manager for JavaScript
- Node.js - JavaScript runtime built on Chrome's V8 JavaScript engine
- AWS SDK for JavaScript in Node.js providing JavaScript objects for AWS services including Amazon S3
- Data formats:
- cURL command to transfer data from or to a server; proxy support, user authentication, FTP upload, HTTP post, SSL connections, cookies, file transfer resume, Metalink, and more
- AWS tools:
- Run applications and services without thinking about servers Serverless
- Amazon API Gateway - create, publish, maintain, monitor, and secure APIs at any scale; create REST and WebSocket APIs that act as a “front door” for applications to access data, business logic, or functionality from backend services
- Lambda - run code without managing servers
- Kinesis - collect, process, and analyze real-time, streaming data
- Kinesis Data Streams (KDS) continuously capture gigabytes of data per second from hundreds of thousands of sources
- Kinesis Data Firehose - capture, transform, and load streaming data into data lakes, data stores and analytics tools
- Simple Storage Service (S3) - object storage
Machine Learning Service for Cybersecurity
- Logistic Regression (LR)
- Clearbit uses email and IP addresses to identify fake and fraudulent accounts
PREVIOUS: Client-Server Model
- REST API for Fraud Detection
- Python
- Pandas
- scikit-learn - sklearn
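At its core, a fraud-scoring REST endpoint built on this stack applies a trained logistic regression to a transaction's feature vector. A dependency-free sketch of that scoring step (the weights and features here are made up; a real service would load coefficients fitted with scikit-learn):

```python
import math

def fraud_probability(features, weights, bias):
    """Logistic regression score: sigmoid of the weighted feature sum plus bias."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical 3-feature transaction and hypothetical trained coefficients.
p = fraud_probability([1.2, -0.7, 3.1], weights=[0.8, -0.5, 0.4], bias=-2.0)
is_fraud = p >= 0.5  # decision threshold the REST API might apply
```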
PROPOSED: Federated Learning in the cloud
Analytics Dashboard
Client: Dashboard ---
- Building Open Source Google Analytics from Scratch | Pavel Tiunov - cube.js
- AWS Web Analytics Dashboard example from Pavel's article
- Cube.js is an open-source modular framework for building analytical web applications. It is optimized for speed and scalability.
- cube.js | GitHub - project contains a web analytics proof of concept (POC) built with Cube.js
- AWS Web Analytics Dashboard demo
- Analytics Dashboard React Application
- React = JavaScript library for building user interfaces
- Create React App | GitHub
Amazon AWS Backend ---
- AWS tools:
DharmaSecurity
I've built a demo app called DharmaSecurity, a cybersecurity fraud detection tool for businesses. Once signed up, a business pastes a code snippet into its website and then gets access to a dashboard that tells it how many fraudulent accounts it has. Our app uses machine learning to automatically remove suspected fraud accounts and flag likely ones for review. To build this, I used a suite of AWS tools, Python, JavaScript, a Logistic Regression (LR) model, a credit card fraud dataset, and a library called OpenMined to enable federated learning and secure multi-party computation. I've packed a lot into this video: animations, code, music, screencasts, skits, etc. Enjoy! Code for "a Cybersecurity Startup" | Siraj Raval - GitHub
Credit Card Fraud Detection | Nick Walker
- Credit Card Fraud Detection | Nick Walker - GitHub Using Under-sampling techniques and Logistic Regression (LR) in order to predict credit card fraud.
This is the kernel submission for the Kaggle competition "Credit Card Fraud Detection". The dataset contains 28 Principal Component Analysis (PCA)-transformed features of transactions made by credit cards in September 2013 by European cardholders. It covers transactions that occurred over two days, with 492 frauds out of 284,807 total transactions (0.172% of the total).
Because the dataset is highly unbalanced, I used a confusion matrix to calculate the precision and recall of my results. I also applied under-sampling, taking a smaller sample of the normal transactions and training a logistic regressor on it. I trained and evaluated the logistic regressor three ways: trained on and applied to all of the data; trained on and applied to only the under-sampled data; and trained on the under-sampled data but applied to all of the data. My recall scores for each were as follows:
- The logistic regressor trained on and applied to all of the data: 0.52
- The logistic regressor trained on and applied to only the under-sampled data: 0.91
- The logistic regressor trained on the under-sampled data and applied to all of the data: 0.92
As the results above show, the logistic regressor trained on the under-sampled data and applied to all of the data had the best results, with a 92% recall rate: a fairly good start for applying the under-sampling technique with only a logistic regressor.
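The two ingredients above, under-sampling and recall, can be illustrated without any ML library: take every fraud plus an equal-sized random sample of normal transactions, and compute recall as TP / (TP + FN). A toy sketch with synthetic labels (not the Kaggle data):

```python
import random

def undersample(labels, seed=0):
    """Return indices of all positives plus an equal-size random sample of negatives."""
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    rng = random.Random(seed)
    return pos + rng.sample(neg, len(pos))

def recall(y_true, y_pred):
    """Recall = true positives / all actual positives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn)

# Synthetic, highly unbalanced labels: 3 frauds among 100 transactions.
labels = [1, 1, 1] + [0] * 97
idx = undersample(labels)                # 3 frauds + 3 sampled normals
r = recall([1, 1, 1, 0], [1, 1, 0, 0])   # 2 of 3 frauds caught
```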
About the Dataset | Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson & Gianluca Bontempi
The dataset contains transactions made by credit cards in September 2013 by European cardholders. It presents transactions that occurred over two days, with 492 frauds out of 284,807 transactions. The dataset is highly unbalanced: the positive class (frauds) accounts for 0.172% of all transactions.
It contains only numerical input variables, which are the result of a Principal Component Analysis (PCA) transformation. Unfortunately, due to confidentiality issues, we cannot provide the original features or more background information about the data. Features V1, V2, ... V28 are the principal components obtained with PCA; the only features that have not been transformed with PCA are 'Time' and 'Amount'.
- Feature 'Time' contains the seconds elapsed between each transaction and the first transaction in the dataset.
- The feature 'Amount' is the transaction amount; this feature can be used for example-dependent cost-sensitive learning.
- The feature 'Class' is the response variable; it takes value 1 in case of fraud and 0 otherwise.
Given the class imbalance ratio, we recommend measuring the accuracy using the Area Under the Precision-Recall Curve (AUPRC). Confusion Matrix accuracy is not meaningful for unbalanced classification.
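The warning about accuracy is easy to verify with the dataset's own numbers: a classifier that predicts "not fraud" for every transaction scores about 99.83% accuracy while catching zero frauds. A quick check:

```python
frauds, total = 492, 284_807

# A useless "always legitimate" classifier gets every non-fraud right...
accuracy = (total - frauds) / total   # roughly 0.9983
# ...yet catches no fraud at all.
recall = 0 / frauds

print(f"accuracy={accuracy:.4%}, recall={recall:.0%}")
```

This is why the authors recommend AUPRC, which stays sensitive to performance on the rare positive class.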
The dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available on http://mlg.ulb.ac.be/BruFence and http://mlg.ulb.ac.be/ARTML
Cite: Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. Calibrating Probability with Under-sampling for Unbalanced Classification. In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015