YouTube search...
...Quora
...Google search
...Google News
...Bing News
- Risk, Compliance and Regulation ... Ethics ... Privacy ... Law ... AI Governance ... AI Verification and Validation
- Data Quality ... validity, accuracy, cleaning, completeness, consistency, encoding, padding, augmentation, labeling, auto-tagging, normalization, standardization, imbalanced data
- Artificial General Intelligence (AGI) to Singularity ... Curious Reasoning ... Emergence ... Moonshots ... Explainable AI ... Automated Learning
- Policy ... Policy vs Plan ... Constitutional AI ... Trust Region Policy Optimization (TRPO) ... Policy Gradient (PG) ... Proximal Policy Optimization (PPO)
- Strategy & Tactics ... Project Management ... Best Practices ... Checklists ... Project Check-in ... Evaluation ... Measures
- AI Governance / Algorithm Administration
- Data Science ... Governance ... Preprocessing ... Exploration ... Interoperability ... Master Data Management (MDM) ... Bias and Variances ... Benchmarks ... Datasets
- Development ... Notebooks ... AI Pair Programming ... Codeless ... Hugging Face ... AIOps/MLOps ... AIaaS/MLaaS
- Libraries & Frameworks Overview ... Libraries & Frameworks ... Git - GitHub and GitLab ... Other Coding options
- AI Solver ... Algorithms ... Administration ... Model Search ... Discriminative vs. Generative ... Train, Validate, and Test
- AI Agent Optimization ... Optimization Methods ... Optimizer ... Objective vs. Cost vs. Loss vs. Error Function ... Exploration
- Testing of Artificial Intelligence | Sogeti
- Other Challenges in Artificial Intelligence
- Data Science Concepts Explained to a Five-year-old | Megan Dibble - Towards Data Science
Guardrails AI
Guardrails AI is a Python package for specifying the structure and type of large language model (LLM) outputs and for validating and correcting them. It works by wrapping LLM API calls to structure, validate, and correct the outputs. It can be used to enforce a wide range of requirements (a usage sketch follows the list below), such as:
- Ensuring that the output is of a certain type (e.g., JSON, Python code, etc.)
- Checking for bias in the output
- Identifying and correcting factual errors
- Preventing the output from containing certain keywords or phrases
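The sketch below illustrates the first requirement, typed and structured output. It assumes the Pydantic-based Guard.from_pydantic entry point of the guardrails package; exact call signatures vary across releases, and the Article schema here is purely illustrative, not from any particular application.

```python
# Minimal sketch: enforce that LLM output parses into a typed structure.
# Assumes the Pydantic-based API of the guardrails-ai package; signatures
# differ between releases, so treat this as illustrative, not definitive.
from pydantic import BaseModel, Field
from guardrails import Guard

class Article(BaseModel):  # hypothetical output schema
    title: str = Field(description="Headline of the article")
    summary: str = Field(description="One-paragraph summary")

guard = Guard.from_pydantic(output_class=Article)

# The guard wraps the actual LLM call, validating (and, on failure,
# re-asking or correcting) the raw output against the Article schema:
# validated = guard(<llm_callable>, prompt="Summarize today's top story")
```

On success, the guard returns output that is guaranteed to parse into the declared schema, which is what "ensuring that the output is of a certain type" means in practice.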
Guardrails AI can be used to improve the safety and reliability of LLMs in a wide range of applications, such as:
- Generating text for websites and blogs
- Writing code and scripts
- Translating languages
- Answering questions in a comprehensive and informative way
Here are some examples of how Guardrails AI can be used:
- A news organization could use Guardrails AI to ensure that the articles it generates are free of bias and factual errors.
- A software company could use Guardrails AI to generate code that is well-formatted and bug-free.
- A customer service chatbot could use Guardrails AI to ensure that its responses are helpful and informative.
Testing
Covering both:
- Testing ‘of’ AI
- Testing ‘with’ AI
EXTENT-2016: Machine Learning and Software Testing
EXTENT-2016: Software Testing & Trading Technology Trends, 22 June 2016, 10 Paternoster Square, London. Machine Learning and Software Testing. Iosif Itkin, CEO, Exactpro, LSEG.
AI and Machine Learning for Testers | Jason Arbon, Appdiff
PNSQC
Webinar – AI in Test Automation
Unlocking the Business Value of Test Automation: "AI in Test Automation". This webinar helps you achieve enhanced quality with further reductions in testing cost and faster test cycles.
Intelligent automation
Manoj Mathen: a self-learning and self-healing automation test suite for software validation.
Validate and monitor your AI and machine learning models
Advanced machine learning and AI models are becoming more and more powerful, but they also tend to become more complicated to validate and monitor.
This has a major impact on businesses' adoption of models. Initial validation and monitoring are not only critical to ensure a model's sound performance; they are also mandatory in some industries, such as banking and insurance. In this workshop, you will learn the best techniques that can be applied manually or automatically to validate and monitor statistical models. The assessment techniques below will be discussed and demonstrated to perform a full model validation:
- Bias and variance error
- Model selection
- Discriminatory Variables
- Adversarial Sensitivity

About Olivier: Olivier is cofounder and Head of Data Science of Moov AI, a data science consulting company. He is co-holder of a patent for an advanced algorithm for assessing borrowing capacity. Olivier is a data science expert with extensive experience in supporting and implementing digital-transformation projects at companies in different sectors: financial services, technology, aerospace, telecommunications, and consumer products. He led the data team and implemented a data culture at Pratt & Whitney Canada, L'Oréal, and GSoft.
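As an illustrative sketch (not from the workshop itself), the snippet below estimates the first technique on the list, bias and variance error, using k-fold cross-validation; the dataset and model are placeholders.

```python
# Illustrative sketch: estimating bias and variance error with k-fold
# cross-validation. The synthetic dataset and Ridge model are placeholder
# choices, not from the workshop.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=10, noise=10.0,
                       random_state=0)
scores = cross_val_score(Ridge(alpha=1.0), X, y,
                         scoring="neg_mean_squared_error", cv=5)
mse = -scores  # flip sign back to plain mean squared error

# A high mean MSE suggests bias (underfitting); a high spread across
# folds suggests variance (sensitivity to the training sample).
print(f"mean MSE = {mse.mean():.1f}, std across folds = {mse.std():.1f}")
```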
The Future of Software Testing with AI | Jason Arbon | STAREAST
In this interview, Jason Arbon, the CEO of Appdiff, explains how artificial intelligence is going to change the way that we test our software. He also explains how he launched his application, Appdiff, and what benefits it can bring to your team.
Verification & Validation (A Software Testing Approach)
INFO 3501, Chapter 10: The IT Project Quality Plan
Improving Testing through Automation and AI | Tariq King | STARWEST
In this interview, Tariq King, the director of test engineering at Ultimate Software, discusses how we can innovate and make our testing better through smarter automation and the use of artificial intelligence. He also explains the fundamentals of white-box testing, so you can find bugs as soon as they happen and do more thorough, targeted testing during software development. https://starwest.techwell.com/
A/B Testing
YouTube search...
...Google search
A/B testing (also known as bucket testing or split-run testing) is a user experience research methodology. A/B tests consist of a randomized experiment with two variants, A and B. It includes application of statistical hypothesis testing or "two-sample hypothesis testing" as used in the field of statistics. A/B testing is a way to compare two versions of a single variable, typically by testing a subject's response to variant A against variant B, and determining which of the two variants is more effective. Wikipedia
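As a minimal illustration of the two-sample hypothesis testing the definition mentions, the sketch below runs a chi-squared test of independence on conversion counts for variants A and B; the counts are made up for illustration.

```python
# Minimal sketch of a two-sample hypothesis test for an A/B experiment,
# using a chi-squared test of independence on conversion counts.
# The counts below are made up for illustration.
from scipy.stats import chi2_contingency

# rows: variant A, variant B; columns: converted, not converted
observed = [
    [120, 880],   # variant A: 120 conversions out of 1000 visitors
    [150, 850],   # variant B: 150 conversions out of 1000 visitors
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the variants likely differ.")
else:
    print("No statistically significant difference detected.")
```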
Talking Bayes to Business: A/B Testing Use Case | Shopify
New York City: https://www.datacouncil.ai/new-york-city. Bayesian tools for statistical analysis have made huge leaps forward in recent years and are on their way to becoming an industry standard. On the other hand, as machine learning tools become more and more prevalent, many issues have come to light: dealing properly with uncertainty, understanding causality, the concept of "statistical significance", etc. In this talk I'll walk you through a basic A/B testing use case and demonstrate the strengths and challenges of Bayesian tools in dealing with these issues. Yizhar (Izzy) Toren (M.Sc.) is a Bayesian by belief, but a frequentist and machine learning data scientist by trade. He graduated from Tel Aviv University's (TAU) interdisciplinary program with a Master's degree in Mathematics, has worked in multiple industries (bio-tech, gaming, financial services, retail, etc.), and has more than 15 years of experience as a data scientist, consultant, and "plain old" statistician. He is also an amateur crossfitter and considers himself a pretty decent Paleo cook; feel free to ask him for recipes!
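For contrast with the frequentist test above, here is a minimal Beta-Binomial sketch of the Bayesian approach the talk describes; the uniform priors and the counts are illustrative, not from the talk.

```python
# Minimal Beta-Binomial sketch of Bayesian A/B testing: with a Beta(1, 1)
# prior, the posterior for each variant's conversion rate is
# Beta(1 + conversions, 1 + non-conversions). We estimate
# P(rate_B > rate_A) by Monte Carlo sampling. Counts are illustrative.
import numpy as np

rng = np.random.default_rng(0)
post_a = rng.beta(1 + 120, 1 + 880, size=100_000)  # variant A posterior
post_b = rng.beta(1 + 150, 1 + 850, size=100_000)  # variant B posterior
prob_b_better = (post_b > post_a).mean()
print(f"P(B beats A) is approximately {prob_b_better:.3f}")
```

Unlike a p-value, the output is a direct probability statement about which variant is better, which is one of the strengths of the Bayesian framing discussed in the talk.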
Facebook's A/B Platform: Interactive Analysis in Realtime | @Scale 2014 - Data
Itamar Rosenn, Engineering Manager at Facebook. This talk presents Facebook's platform for streamlined and automated A/B test analysis. We will discuss the system architecture, how it enables interactive analysis on near-realtime results, and the challenges we've faced in scaling the system to meet our customer needs.
Amazon AI Conclave 2019 - Contextual Bandits for Efficient A/B Testing
Speakers: Saurabh Gupta, Applied Scientist, Amazon, and Bharathan Balaji, Research Scientist, Amazon. Amazon AI Conclave 2019 was a leading artificial intelligence and machine learning conference held on December 19-20, 2019 in Bangalore. It hosted over 1,200 delegates comprising business and technology leaders from startups and enterprises, data scientists, ML developers, data engineers, and architects. Sessions at the event covered some of Amazon's broadest and deepest set of machine learning and AI services.
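As a rough illustration of the bandit idea behind this talk, the sketch below runs an epsilon-greedy multi-armed bandit over two variants. It is a simplified, context-free version (the talk covers contextual bandits), and the reward probabilities are made up.

```python
# Minimal epsilon-greedy multi-armed bandit sketch. Unlike a fixed A/B
# split, the bandit shifts traffic toward the better-performing variant
# as evidence accumulates. Context is omitted; rates are illustrative.
import random

true_rates = {"A": 0.12, "B": 0.15}   # hidden conversion rates per variant
counts = {arm: 0 for arm in true_rates}
values = {arm: 0.0 for arm in true_rates}
epsilon = 0.1  # fraction of traffic reserved for exploration

for _ in range(10_000):
    if random.random() < epsilon:            # explore a random variant
        arm = random.choice(list(true_rates))
    else:                                    # exploit the best estimate
        arm = max(values, key=values.get)
    reward = 1.0 if random.random() < true_rates[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # running mean

print({arm: round(values[arm], 3) for arm in values})
print(counts)  # most traffic ends up on the better arm
```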
Incorporating AI in A/B Testing - Pavel Dmitriev
Despite the rapidly growing number of applications of AI, accurately measuring the quality of AI solutions remains a challenge. In this talk, I will highlight the issues with traditional approaches to evaluating AI systems and explain how A/B testing, the gold standard for measuring causal effects, can be used to resolve them. I will share practical learnings and pitfalls from a decade of applying A/B testing to evaluate AI systems, which practitioners will be able to apply in their domains. Magnimind Academy TV Presents, July 2020.
Multivariate Testing (MVT)
YouTube search...
...Google search
Multivariate testing is a technique for testing a hypothesis in which multiple variables are modified. The goal of multivariate testing is to determine which combination of variations performs the best out of all of the possible combinations. Websites and mobile apps are made of combinations of changeable elements. A multivariate test will change multiple elements, like changing a picture and headline at the same time. Three variations of the image and two variations of the headline are combined to create six versions of the content, which are tested concurrently to find the winning variation. Optimizely
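As a small illustration of the combinatorics described above, the sketch below enumerates the full factorial of three image variants and two headlines, yielding the six concurrent versions from the definition; the variant names are made up.

```python
# Minimal sketch of how multivariate test combinations are enumerated:
# the full factorial of all element variations (names are illustrative).
from itertools import product

images = ["hero_photo", "product_shot", "illustration"]       # 3 variants
headlines = ["Save 20% today", "Free shipping on all orders"]  # 2 variants

versions = list(product(images, headlines))
for i, (image, headline) in enumerate(versions, start=1):
    print(f"version {i}: image={image!r}, headline={headline!r}")
print(f"{len(versions)} versions tested concurrently")  # 3 x 2 = 6
```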
To A/B or Multivariate Test? That is the Question!
If you want to get the most ROI from your tests, you need to use the right type of test. Both A/B and Multivariate (MVT) testing have a lot in common, but knowing when to apply each one can greatly increase the value you reap from your test results. Trudi Miller, PhD, Oracle Maxymiser's Technical Trainer, will cover:
- The primary differences between A/B and MVT testing
- When it is better to use A/B and when it is more appropriate to use MVT
- How to use insights gathered from A/B tests and MVT testing
Introduction to multivariate data analysis using vegan
Get started using the vegan package for R for multivariate data analysis and community ecology. Further information about the webinar is in the GitHub repo.