Defense

Revision as of 14:08, 11 July 2024 by BPeat

YouTube ... Quora ...Google search ...Google News ...Bing News



Virtual war is not a war between soldiers, tanks or airplanes, but a clash between algorithms. Here victory means the ability to build the basic rules determining how the world works. Inferior algorithms will simply operate according to the rules and outcomes set by more foundational software. Geopolitics used to mean the struggle to control the physical world. In the future it will be about the struggle to build a virtual one. - How Palantir Is Shaping the Future of Warfare | Bruno MAÇÃES - Time



AI Opportunities


To enable this change, the Department is adopting new technologies as part of its Digital Modernization program - from automation to Artificial Intelligence (AI) to 5G-enabled edge devices. ...Artificial Intelligence (AI) is a long-term data competency grounded in training quality datasets (TQD) that are the pieces of information and associated labels used to build algorithmic models. TQD and the algorithmic models will increasingly become DoD’s most valuable digital assets. As DoD modernizes and integrates AI technologies into joint warfighting, generating DoD-wide visibility of and access to these digital assets will be vital in an era of algorithmic warfare. We must also understand that our competitors gain advantage if these assets become compromised. ... modern governance framework for managing the lifecycle of the algorithm models and associated data that provides protected visibility and responsible brokerage of these digital assets. DoD Data Strategy
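The strategy's notion of TQD, data items plus their labels managed as governed assets, can be made concrete with a minimal sketch. The record fields, the provenance check, and the example URIs below are illustrative assumptions, not a DoD schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class TrainingRecord:
    """One item of a training quality dataset: the data plus its label."""
    source_uri: str      # where the raw data came from (provenance)
    label: str           # human- or machine-assigned label
    labeled_on: date     # when the label was assigned
    classification: str  # handling marking, e.g. "UNCLASSIFIED"

def audit(records):
    """Return records missing a label -- a basic governance check."""
    return [r for r in records if not r.label]

records = [
    TrainingRecord("s3://bucket/img001.png", "vehicle", date(2024, 1, 5), "UNCLASSIFIED"),
    TrainingRecord("s3://bucket/img002.png", "", date(2024, 1, 6), "UNCLASSIFIED"),
]
print(len(audit(records)))  # 1 -- one record has no label
```

A governance layer for "protected visibility and responsible brokerage" would sit on top of records like these; the sketch only shows the asset shape.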


AI in Multi-domain Operations: Future Artificial Intelligence War
What does AI look like in Multi-domain Operations (MDO) or Cross-domain Operations? What does Future Artificial Intelligence look like in War? Where should we research and strive, and what should we avoid in Artificial Intelligence? ...Matthew Voke

Strengthening JADC2 Efforts with Better Data Accessibility and Agility
Leveraging a data fabric, A.I., and machine learning is crucial to the success of JADC2 for the Department of Defense.

Artificial Intelligence in Military: How will AI, Deep Learning, and Robotics Change Military
Progress in artificial intelligence (AI), deep learning, and robotics allows new capabilities that will decisively affect military strategies.

Warriors Corner: Artificial Intelligence
Brig. Gen. Matthew Easley, Director, Army AI Task Force

Artificial Intelligence and Quantum Technology: Implications for U.S. National Security
Hudson Institute hosted a discussion on the increasing risk that rapidly emerging advanced technologies pose to U.S. national security. Competitor nations such as Russia and China have devoted significant resources to the areas of artificial intelligence (AI) and quantum information science, particularly quantum computing. A recent report from the bipartisan Commission on the National Defense Strategy for the United States warned: “U.S. superiority in key areas of innovation is decreasing or has disappeared [while] U.S. competitors are investing heavily in innovation.” Given the technologies’ enormous promise for benefiting humankind, how should Washington respond to ensure U.S. military superiority while promoting the peaceful use of AI and quantum technology?

AI for defence
interview with Eric Segura at the Defence experience

What Disruptive Technologies and Artificial Intelligence Mean for NATO
This video establishes a baseline understanding of disruptive technology (DT), artificial intelligence (AI) and autonomy aimed at NATO policymakers and senior military officials. It explains disruptive technology and its historical importance for military innovation and changes in warfare, in addition to the current state of artificial intelligence and autonomous systems technology. The video features experts Dr. Andrew Moore, Dean of the School of Computer Science at Carnegie Mellon University and Co-Chair of the CNAS Task Force on AI and National Security, and Paul Scharre, Senior Fellow and Director of the Technology and National Security Program at CNAS and Executive Director of the CNAS Task Force on AI and National Security.

Rise of the Terminators - Military Artificial Intelligence (AI) | Weapons that think for Themselves
Weapons and warfare have become increasingly sophisticated; the latest battlefield technology is starting to look more like a computer game, with wirelessly connected soldiers communicating via sound and vision to drones carrying satellite-linked wi-fi hotspots, given orders by commanders who could be on the other side of the world.

Palantir AIP | Defense and Military
Palantir AIP brings together the latest in large language models and cutting edge AI to activate data and models from the most highly sensitive environments in both a legal and ethical way. From classified networks, to devices on the tactical edge, find out how AIP can use industry leading guard rails to power responsible, effective and compliant AI-advantage for defense organizations.

AI Weapons, War and Ethics
This lecture explores the AI behind fully autonomous weapons and the arguments for and against their use in the world. A lecture by Yorick Wilks, Visiting Professor of Artificial Intelligence, 05 November 2019. It then looks at the more complex issues of the ethical role of the state in the protection of its population, and the ethical choices of individuals versus those of corporations, whose role in large-scale military-industrial complexes is crucial. The lecture also mentions the emergence of a form of psychopathology in some weapons producers.

A.I. Is Making it Easier to Kill (You). Here’s How. | NYT
A tank that drives itself. A drone that picks its own targets. A machine gun with facial recognition software. Sounds like science fiction? A.I. fueled weapons are already here.

Jay Tuck: AI - Humanity's Most Serious Challenge
US defense expert Jay Tuck was news director of the daily news program ARD-Tagesthemen and combat correspondent for German Television in two Gulf Wars. He has produced over 500 segments for the network. His investigative reports on security policy, espionage activities and weapons technology appear in leading newspapers, television networks and magazines throughout Europe, including Cicero, Focus, PC-Welt, Playboy, Stern, Welt am Sonntag and ZEITmagazin. He is author of a widely acclaimed book on electronic intelligence activities, “High-Tech Espionage” (St. Martin’s Press), published in fourteen countries.

Artificial Intelligence - A Threat to Strategic Stability?
In this seminar, Postdoctoral Research Fellow James Johnson explores: What is military artificial intelligence (AI) and how is it different from civilian AI? How does popular culture depict AI, and what does it get wrong? Topics include the AI-cyber nexus, AI “hunts for nukes,” and drone swarming. The promises and dangers of AI, he notes, do not exist in a vacuum. Not dissimilar from other weapons systems, AI needs only to be perceived as capable to have a destabilizing impact. He also emphasizes that the strategic advantages of AI-infused weapons may prove irresistible to states seeking to gain the technological upper hand over their rivals. He explores the multifaceted possible intersections of AI with nuclear weapons and suggests that AI-enhanced conventional weapons might pose one of the greatest risks of nuclear escalation in future warfare scenarios, challenging long-held assumptions about deterrence, arms control, and crisis stability.

Speaker: Dr. James Johnson, Postdoctoral Research Fellow at the James Martin Center for Nonproliferation Studies (CNS) at the Middlebury Institute of International Studies, Monterey. James holds a Ph.D. in Politics & International Relations from the University of Leicester, where he is also an honorary visiting fellow with the School of History & International Relations. Dr. Johnson has published peer-reviewed articles in journals including the Pacific Review, Asian Security, Strategic Studies Quarterly, The Washington Quarterly, Defense & Security Analysis, The Journal of Cyber Policy, and Comparative Strategy. He is the author of The US-China Military & Defense Relationship during the Obama Presidency. His latest book, titled "Artificial Intelligence & the Future of Warfare: USA, China, and Strategic Stability", is under advanced contract with OUP/Manchester University Press. James is fluent in Mandarin.

How AI is driving a future of autonomous warfare | DW Analysis
The artificial intelligence revolution is just getting started. But it is already transforming conflict. Militaries all the way from the superpowers to tiny states are seizing on autonomous weapons as essential to surviving the wars of the future. But this mounting arms-race dynamic could lead the world to dangerous places, with algorithms interacting so fast that they are beyond human control. Uncontrolled escalation, even wars that erupt without any human input at all. DW maps out the future of autonomous warfare, based on conflicts we have already seen – and predictions from experts of what will come next. For more on the role of technology in future wars, check out the extended version of this video – which includes a blow-by-blow scenario of a cyber attack against nuclear weapons command and control systems.

NATO ~ North Atlantic Treaty Organization

NATO has recently updated its Artificial Intelligence (AI) strategy to address the rapid advancements and implications of AI technologies, particularly focusing on generative AI. This update comes amid a changing global landscape, including significant technological developments and geopolitical tensions.



First is tackling and conceptualizing interoperability as one of the bedrock challenges for integrating AI. ... shared between allies. - Calibrating NATO’s Vision of AI-Enabled Decision Support | Ian Reynolds & Yasir Atalan - Center for Strategic and International Studies



The revised strategy emphasizes the potential benefits of AI while acknowledging the risks associated with its misuse, such as the spread of disinformation and the manipulation of information operations.

Key aspects of the revised AI strategy include:

  • Generative AI and Disinformation: The strategy highlights the dual-edged nature of generative AI, recognizing both its immense potential for positive impact and the risk it poses to society and security through the generation of disinformation. It underscores the importance of NATO members being vigilant against AI-generated disinformation and ensuring that AI applications are used responsibly and in accordance with NATO's Principles of Responsible Use (PRUs).
  • Responsible AI Practices: Building on the principles established in 2021, the revised strategy advocates for the systematic institutionalization of best practices in AI development and deployment. This reflects a broader international trend, with NATO allies responding positively to the call for responsible AI practices.
  • Testing and Evaluation of AI Technologies: The strategy places a strong emphasis on the need for thorough testing, evaluation, verification, and validation (TEV&V) of AI technologies. This includes setting up an Alliance-wide AI TEV&V landscape to support the adoption of responsible AI. The strategy specifically mentions utilizing the network of DIANA-affiliated Test Centers, indicating a move towards a more structured approach to assessing AI technologies' safety and effectiveness.
  • Employment of Generative AI: While the strategy does not specify how NATO should employ generative AI, it encourages the exploration of its potential benefits. Precedents from the United States suggest that large language models could be used to manage extensive data sets, automate routine tasks, and enhance analytical capabilities in areas like signal intelligence and logistics.
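The TEV&V emphasis above can be illustrated with a minimal evaluation gate: a model must clear a minimum accuracy on a held-out labeled set before it is approved for use. The stub model, the test set, and the 0.9 threshold are invented for illustration; they are not NATO or DIANA procedures:

```python
def evaluate(model, test_set, threshold=0.9):
    """Score a model on labeled examples and apply an accuracy gate."""
    correct = sum(1 for x, label in test_set if model(x) == label)
    accuracy = correct / len(test_set)
    return accuracy, accuracy >= threshold

# Stub "model" and labeled test set, purely for illustration.
model = lambda x: "threat" if x > 5 else "benign"
test_set = [(1, "benign"), (2, "benign"), (7, "threat"), (9, "threat"), (4, "threat")]
accuracy, cleared = evaluate(model, test_set, threshold=0.9)
print(accuracy, cleared)  # 0.8 False -- the model fails the gate
```

Real TEV&V also covers robustness, verification, and validation; the point of the sketch is only that "cleared for use" becomes an explicit, testable decision.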

This revised AI strategy marks NATO's commitment to leveraging AI technologies for strategic advantage while ensuring that these tools are used ethically and securely. It reflects a nuanced understanding of the challenges and opportunities presented by AI, emphasizing the need for continuous vigilance, rigorous testing, and adherence to responsible use principles.


Chief Digital and Artificial Intelligence Office (CDAO)

YouTube ... Quora ...Google search ...Google News ...Bing News ...CDAO +ai News

The Department of Defense’s Chief Digital and Artificial Intelligence Office (CDAO) is responsible for accelerating the DoD’s adoption of data, analytics, and AI to generate decision advantage, from the boardroom to the battlefield. Stood up in February 2022 by integrating the Joint Artificial Intelligence Center (JAIC), Defense Digital Services (DDS), the Chief Data Officer, and the enterprise platform Advana into one organization, the CDAO is building a strong foundation for data, analytic, and AI-enabled capabilities to be developed and fielded at scale. Part of this foundation is ensuring the Department has the necessary people, platforms, and processes needed to continuously provide business leaders and warfighters with agile solutions.

The CDAO will perform several critical functions in close coordination with the Services, Joint Staff, CIO, USD (R&E), and other digital leaders:

  • Lead the Department’s strategy and policy on data, analytics, and AI adoption, as well as govern and oversee efforts across the Department.
  • Enable the development of digital and AI-enabled solutions across the Department, while also selectively scaling proven solutions for enterprise and joint use cases.
  • Provide a sophisticated cadre of technical experts who serve as a de facto data and digital response force able to address urgent crises and emerging challenges with state-of-the-art digital solutions.

The CDAO’s functions reflect the rising strategic value of information to decision-making and advanced capabilities from the boardroom to the battlefield. The CDAO’s form reflects the leadership the Department needs to accelerate its progress in harnessing information within a rapidly changing technology landscape.

The CDAO achieved full operating capability on 1 June 2022 and is expected to have an immediate impact by providing several concrete deliverables this year.

  • Review and more tightly integrate the Department’s policy, strategy, and governance of data, analytics, and AI, to include an integrated Data, Analytics and AI Strategy as well as maturing a Responsible AI Ecosystem.
  • Provide the enterprise-level infrastructure and services that enable efforts to advance adoption of data, analytics, and AI, to include an expanded and more accessible enterprise data repository and data catalogue with designated authoritative data sources, common data models for enterprise and joint use cases, as well as associated coding and algorithms to serve as a “public good” as Department stakeholders put data on the offensive.
  • Solve and scale enterprise and joint use cases, including executive analytics to measure progress on implementation of the forthcoming 2022 National Defense Strategy, a common operational picture for Combatant Commanders from the operational to the strategic level as part of the Advancing Data and AI (ADA) initiative, and better tools and analytics to assist the Department’s senior leaders and Combatant Commanders with dynamic campaigning.



ICIT 2019 Briefing: The JAIC - Using AI to Transform & Secure the DoD w/ Col. Trent, DoD
ICIT Fall 2019 Briefing in Washington D.C. Featuring Colonel Stoney Trent, Chief of Missions, Joint Artificial Intelligence Center, DoD

2019 ICIT Briefing: Insights w/ DoD JAIC Chief of Missions, Colonel Stoney Trent
ICIT 2019 Fall Briefing in Washington D.C. : DoD, Joint Artificial Intelligence Center, Chief of Missions Colonel Stoney Trent answers "What is the most important lesson that attendees should take away from your remarks at today's ICIT briefing?"

Harnessing Artificial Intelligence - AI in the DoD (Lecture #18)
Harnessing Artificial Intelligence - AI in the DoD (Lecture #18, Dec. 2, 2019); By Dr. Bret Michael, Professor, NPS Department of Computer Science / Department of Electrical and Computer Engineering

Online Event: A Conversation with JAIC Director Lt. Gen. John N.T. “Jack” Shanahan
Please join the International Security Program on Friday, May 29th at 9:30 am ET for a conversation with Lieutenant General Jack Shanahan, the Director of the Department of Defense Joint Artificial Intelligence Center. Established in 2018, the JAIC is the organization leading the Defense Department’s efforts to operationalize AI for national security. Beyond technology development, the JAIC serves as the focal point for AI governance in defense and the institution of ethical principles for AI use. This conversation will explore how AI is being integrated into the defense enterprise and how the Department is adapting talent acquisition and management to the needs of a 21st century digital organization. AI and subsets like machine learning also have the potential to reshape and even transform intelligence missions. The conversation will also explore how these technologies can better empower U.S. intelligence and how the Department of Defense and broader Intelligence Community can best leverage these technologies for future defense and intelligence missions.

Data will be the Fuel

Data will be the fuel and the engine for everything the Defense Department has to do to bring intelligence and operations together, DoD's chief information officer told CIOs and technology leaders from across the department in a virtual global town hall meeting.

Dana Deasy said during the Aug. 12 event that quality data that is secure will also help to enable the development of artificial intelligence. With AI, humans and machines are going to collaborate effectively and efficiently in an ethical manner, Deasy said, lauding the progress being made by the Joint Artificial Intelligence Center's work over the last 18 months. DoD Leaders Provide Digital Modernization Updates | David Vergun - DoD News

US Defense Advanced Research Projects Agency (DARPA)

Youtube search... ...Google search ...Google News

DARPA continues to lead innovation in AI research as it funds a broad portfolio of R&D programs, ranging from basic research to advanced technology development. DARPA believes this future, where systems are capable of acquiring new knowledge through generative contextual and explanatory models, will be realized upon the development and application of “Third Wave” AI technologies. DARPA announced in September 2018 a multi-year investment of more than $2 billion in new and existing programs called the “AI Next” campaign. Key areas of the campaign include automating critical DoD business processes, such as security clearance vetting or accrediting software systems for operational deployment; improving the robustness and reliability of AI systems; enhancing the security and resiliency of machine learning and AI technologies; reducing power, data, and performance inefficiencies; and pioneering the next generation of AI algorithms and applications, such as “explainability” and common sense reasoning. AI Next Campaign | DARPA

DARPA OFFSET Program Calls for Second Swarm Sprints
The focus of this swarm sprint is on enabling improved swarm autonomy through enhancements of swarm platforms and/or autonomy elements, with the operational backdrop of utilizing a diverse swarm of 50 air and ground robots to isolate an urban objective within an area of two square city blocks over a mission duration of 15 to 30 minutes. Swarm Sprinters will leverage existing or develop new hardware components, swarm algorithms, and/or swarm primitives to enable novel capabilities that specifically showcase the advantages of a swarm when leveraging and operating in complex urban environments.
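A toy sketch of the swarm primitives the sprint describes, assuming a flat 2-D world: each agent balances attraction toward a shared objective against separation from its nearest neighbor. The agent positions, speed, and separation parameters are illustrative assumptions, not OFFSET algorithms:

```python
import math

def step(agents, goal, speed=0.5, min_sep=1.0):
    """Advance each agent: move toward the goal, repel from a too-close neighbor."""
    new = []
    for i, (x, y) in enumerate(agents):
        # attraction toward the shared objective
        gx, gy = goal[0] - x, goal[1] - y
        d = math.hypot(gx, gy) or 1.0
        vx, vy = speed * gx / d, speed * gy / d
        # repulsion from the nearest other agent if inside the separation radius
        others = [a for j, a in enumerate(agents) if j != i]
        nx, ny = min(others, key=lambda a: math.hypot(a[0] - x, a[1] - y))
        sep = math.hypot(nx - x, ny - y)
        if sep < min_sep:
            vx += (x - nx) / (sep or 1e-9)
            vy += (y - ny) / (sep or 1e-9)
        new.append((x + vx, y + vy))
    return new

agents = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
goal = (5.0, 5.0)
for _ in range(10):
    agents = step(agents, goal)
# after 10 steps every agent has closed on the objective
print(max(math.hypot(x - 5, y - 5) for x, y in agents))
```

Real swarm sprints layer many such primitives (formation, search, isolation) over dozens of heterogeneous platforms; the sketch shows only the attract/repel core.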

A DARPA Perspective on Artificial Intelligence
What's the ground truth on artificial intelligence (AI)? In this video, John Launchbury, the Director of DARPA's Information Innovation Office (I2O), attempts to demystify AI--what it can do, what it can't do, and where it is headed. Through a discussion of the "three waves of AI" and the capabilities required for AI to reach its full potential, John provides analytical context to help understand the roles AI already has played, does play now, and could play in the future.

AI and Security
In the future, every company will be using AI, which means that every company will need a secure infrastructure that addresses AI security concerns. At the same time, the domain of computer security has been revolutionized by AI techniques, including machine learning, planning, and automatic reasoning. What are the opportunities for researchers in both fields—security infrastructure and AI—to learn from each other and continue this fruitful collaboration? This session will cover two main topics. In the first half, we will discuss how AI techniques have changed security, using a case study of the DARPA Cyber Grand Challenge, where teams built systems that can reason about security in real time. In the second half, we will talk about security issues inherent in AI. How can we ensure the integrity of decisions from the AI that drives a business? How can we defend against adversarial control of training data? Together, we will identify common problems for future research.

ERI Summit 2020: Artificial Intelligence, Autonomy, and Processing
Mr. Gilman Louie, Commissioner, National Security Commission on Artificial Intelligence (NSCAI) AI To Revolutionize Radios and Communications (Related Programs: FRANC, PEACH, HyDDENN) Dr. Y.K. Chen, DARPA Dr. Jan M. Rabaey, University of California Berkeley Dr. Silvija Filipovic, Perspecta Labs Dr. Sudhakar Pamarti, University of California Los Angeles Lifelong Learning Systems (Related Program: L2M) Mr. Ted Senator, DARPA Dr. Eric Eaton, University of Pennsylvania Ferroelectronics Lightning Talk Dr. Ali Keshavarzi, DARPA Quantum Inspired Algorithms Lightning Talk Dr. Bryan Jacobs, DARPA Visit https://eri-summit.darpa.mil/ for more details on the ERI Summit.


Dogfight

Youtube search... ...Google search

The DARPA Air Combat Evolution (ACE) program involves AI development and demonstration in three program phases:

  • modeling and simulation
  • sub-scale aircraft
  • full-scale aircraft testing

Ultimately, ACE will be flying AI algorithms on live aircraft to demonstrate trusted, scalable, human-level autonomy for air combat.

AlphaDogfight Trials is a precursor to the DARPA ACE program. The DARPA AlphaDogfight Trials aim to demonstrate the feasibility of developing effective, intelligent autonomous agents capable of defeating adversary aircraft in a dogfight. AlphaDogfight Trials Competition #3 is being broadcast live from the Johns Hopkins University Applied Physics Lab (JHU/APL) via a ZoomGov Webinar on 18-20 August 2020. DARPA’s AlphaDogfight Trials seek to advance the state of artificial intelligence (AI) technologies applied to air combat operations. The trials are a computer-based competition designed to demonstrate advanced AI algorithms that can perform simulated within-visual-range air combat maneuvering, otherwise known as a dogfight. The goal is to use the dogfight as the challenge problem to increase performance and trust in AI algorithms and to bring together the AI research and operator communities.

In August 2019, DARPA selected eight technically and organizationally diverse teams to compete in the AlphaDogfight Trials, with the purpose of energizing and expanding a base of researchers and developers applying AI technologies to complex operational problems. The first of three AlphaDogfight Trials competition events was held at JHU/APL in November 2019. Trial #1 was an exhibition match with the opportunity for teams to compete against different APL-developed adversary agents and test the simulation environment at scale. Trial #2, held in January 2020, was the first competition where teams were ranked against each other and tested their agents against more challenging adversary agents. Trial #3 is the final competition.

The U.S. military recently conducted a real-world dogfight between a human pilot and an AI-controlled F-16 fighter jet. This groundbreaking test involved a heavily modified two-seat F-16D X-62A, also known as the Variable-stability In-flight Simulator Test Aircraft (VISTA), going head-to-head with another F-16. The AI-controlled aircraft performed defensive and offensive maneuvers, getting as close as 2,000 feet to the crewed aircraft. This test was part of the Air Combat Evolution (ACE) program, which has been developing autonomous combat systems with AI-controlled aircraft. However, the U.S. military has not disclosed who won the dogfight. This secrecy is likely due to the sensitive nature of the information and the potential implications for future warfare strategies. Despite this, the program is said to be progressing even faster than officials had hoped. It’s a significant step in the evolution of AI in aerial combat and could potentially revolutionize the future of unmanned aerial vehicles (UAVs).
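The basic geometry behind a within-visual-range engagement can be sketched in a few lines, assuming a flat 2-D picture: is the target inside a maximum range and inside a cone off the shooter's nose? The 2,000-unit range echoes the distance quoted above, but the cone width and all positions are illustrative assumptions, not ACE parameters:

```python
import math

def in_wvr_cone(shooter, heading_deg, target, max_range=2000.0, cone_deg=30.0):
    """True if the target is within range and within the aiming cone off the nose."""
    dx, dy = target[0] - shooter[0], target[1] - shooter[1]
    rng = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx))
    # fold the bearing difference into [-180, 180] and take its magnitude
    off_nose = abs((bearing - heading_deg + 180) % 360 - 180)
    return rng <= max_range and off_nose <= cone_deg

print(in_wvr_cone((0, 0), 0.0, (1500, 200)))   # True: close and near the nose
print(in_wvr_cone((0, 0), 0.0, (1200, 1200)))  # False: 45 degrees off the nose
```

A dogfighting agent evaluates checks like this many times per second while also predicting where the adversary will be; the sketch shows only the instantaneous test.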

DARPA ACE & USAF X-62A Achieve World First for AI in Aerospace
DARPA’s Air Combat Evolution (ACE) program has achieved the first-ever in-air tests of AI algorithms autonomously flying a fighter jet against a human-piloted fighter jet in within-visual-range combat scenarios (sometimes referred to as “dogfighting”).

In this video, team members discuss what makes the ACE program unlike other aerospace autonomy projects and how it represents a transformational moment in aerospace history, establishing a foundation for ethical, trusted, human-machine teaming for complex military and civilian applications.

In flight, the ACE AI algorithms controlled a specially modified F-16 test aircraft known as the X-62A, or VISTA (Variable In-flight Simulator Test Aircraft), at the Air Force Test Pilot School at Edwards Air Force Base, California, where all demonstrations of autonomous combat maneuvers took place in 2023 and are continuing in 2024.

The World's First AI-Flown Fighter Jet Can Dogfight
In a joint project between DARPA and the US Air Force, a special aircraft called the X-62 Vista (based on the F-16) became the first tactical aircraft to be piloted by artificial intelligence.

DARPA's Initiative Shows that the U.S. is Ahead of China & Russia in Military Artificial Intelligence
An AI algorithm piloting an F-16 Fighting Falcon in a simulated dogfight against a seasoned US Air Force pilot achieved a perfect score with five straight wins in a competition. The Defense Advanced Research Projects Agency (DARPA) held the final round of its third and final AlphaDogfight competition on Thursday, pitting an AI system designed by Heron Systems against a human pilot in a "simulated within-visual-range air combat" situation. According to Breaking Defense, Heron's AI went head-to-head with a graduate of the Air Force's Weapons Instructor Course with the callsign "Banger". An expert commentator, DARPA's Justin Mock, said that the AI algorithm demonstrated "superhuman aiming ability" during the dogfight and the human pilot couldn't score a single hit. In this video, Defense Updates analyzes how AI beating a fighter pilot in a virtual dogfight shows that the U.S. military is stealing a march in this important area.

Watch DARPA's AI vs. Human in Virtual F-16 Aerial Dogfight
See Heron (AI) and Banger (Human) battle it out in a virtual, aerial F-16 dogfight. And watch who wins!

Project Maven

Project Maven, officially known as the Algorithmic Warfare Cross-Functional Team (AWCFT), was a project initiated by the United States Department of Defense (DoD) in 2017. Its primary objective was to accelerate the adoption of artificial intelligence (AI) and machine learning (ML) technologies to analyze the massive amounts of data collected by military drones and other sources. The project aimed to improve the speed and accuracy of military decision-making and reduce human errors. Applications of Project Maven included detecting and tracking objects of interest, such as vehicles, buildings, and people, in full-motion video; identifying and classifying objects in still imagery; and enhancing situational awareness and threat detection. The project also involved collaborating with industry partners, such as Google, Microsoft, Amazon, and IBM, to leverage their expertise and resources in AI and ML.
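One generic building block behind the detect-and-track applications described above is associating a new detection with an existing track by intersection-over-union (IoU) of bounding boxes. This is a standard computer-vision technique, not a description of Maven's actual pipeline; the boxes and the 0.5 threshold below are invented for illustration:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match(track_box, detections, threshold=0.5):
    """Return the detection that best continues a track, or None."""
    best = max(detections, key=lambda d: iou(track_box, d))
    return best if iou(track_box, best) >= threshold else None

track = (10, 10, 50, 50)
detections = [(12, 11, 52, 49), (200, 200, 240, 240)]
print(match(track, detections))  # the nearby box continues the track
```

In a full-motion-video pipeline, a detector emits boxes per frame and an associator like this stitches them into persistent tracks of vehicles or people.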

The Pentagon is using Google's AI for drones
The Department of Defense and Google have worked on a few projects in the past, but their newest initiative may improve the efficiency of our drones. Learn more about this story at www.newsy.com/77397/

Algorithmic Warfare: The Next Military-Technical Revolution?
Robert Work, 32nd United States Deputy Secretary of Defense, discusses Algorithmic Warfare, Project Maven, and why data is the fuel that will lead to a revolutionary period in warfare

US Defense Information Systems Agency (DISA)

Youtube search... ...Google search ...Google News



To be the trusted provider to connect and protect the warfighter in cyberspace. - DISA's Vision



Implement AI and machine learning (ML) to support the cyber defenders in identifying malicious actors through the automated analysis of cyber sensors, threat indicators, and system outputs. As the AI/ML program matures, we are closer to systems fighting systems, reducing work hours invested. This consistent environment facilitates global shared workflow and integrated operations between global, regional, and mission partner commands. DISA Strategic Plan 2019-2022
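At its simplest, the automated analysis the plan describes starts with matching sensor events against known threat indicators so defenders triage only the flagged remainder. The indicator values and log events below are invented for illustration, not DISA data:

```python
# Hypothetical indicator set; real feeds would be far larger and updated continuously.
THREAT_INDICATORS = {"198.51.100.7", "evil.example.com"}

def triage(events):
    """Split events into (flagged, cleared) by indicator match."""
    flagged = [e for e in events if e["remote"] in THREAT_INDICATORS]
    cleared = [e for e in events if e["remote"] not in THREAT_INDICATORS]
    return flagged, cleared

events = [
    {"host": "ws01", "remote": "198.51.100.7"},   # matches an indicator
    {"host": "ws02", "remote": "203.0.113.9"},    # benign
]
flagged, cleared = triage(events)
print(len(flagged), len(cleared))  # 1 1
```

The ML layer the plan anticipates would score the cleared remainder for anomalies rather than rely on exact matches; the sketch shows only the rule-based first pass.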

Dana Deasy – DISA Forecast to Industry 2018
Dana Deasy, chief information officer for the Department of Defense, speaks about the national defense strategy and his four key focus areas at the DISA Forecast to Industry on Nov. 5, 2018.

Navy Vice Adm. Nancy Norton - DISA Forecast to Industry 2018
Navy Vice Adm. Nancy Norton, DISA director and Joint Force Headquarters - DoD Information Network (JFHQ-DODIN) commander, speaks about trusted partnerships at the DISA Forecast to Industry on Nov. 5, 2018.

Anthony Montemarano – DISA Forecast to Industry 2018
Anthony Montemarano, DISA senior procurement executive and executive deputy director, provides an overview of the agency’s structure and its key leaders at the DISA Forecast to Industry on Nov. 5, 2018.

Dr. Brian Hermann – DISA Forecast to Industry 2018
Dr. Brian Hermann, acting services development executive, speaks about agency contracting and acquisition opportunities for fiscal year 2019 and 2020 at the DISA Forecast to Industry on Nov. 5, 2018.

DISA 2019 Look Book
USDISA

DISA Communicator Video
Video premiered at DISA’s Forecast to Industry 2019 by Army Maj. Gen. Garrett Yee, assistant to the director.

DISA's Responsible Use of Generative AI Tools

The Defense Information Systems Agency (DISA) Director, Lt. Gen. Robert Skinner, has referred to generative AI as "one of the most disruptive" technological developments in a "very long, long time". Skinner has also expressed concern about how generative AI models could be used for widespread disinformation and offensive cyberattacks. However, he has also emphasized the potential benefits of generative AI in helping individuals to get up to the level of high-end adversaries in a much faster manner. DISA has added generative AI to its tech watch list and is seeking industry help to leverage the technology. At the AFCEA TechNet Cyber conference, DoD leaders discussed the potential of generative AI models and emphasized the need to leverage the technology as a force multiplier. They also stated that pausing generative AI research would be a mistake, as adversaries are not pausing their research. In summary, while DISA and DoD leaders recognize the potential risks associated with generative AI, they also see the potential benefits and are seeking to leverage the technology to enhance their defense systems.

Responsible Artificial Intelligence in the Military (REAIM)

Youtube search... ...Google search ...Google News

Azure Government Top Secret

Youtube search... ...Google search ...Google News

  • Azure Government Top Secret now generally available for US national security missions -Microsoft ... serves the national security mission and empowers leaders across the Intelligence Community (IC), Department of Defense (DoD), and Federal Civilian agencies to innovate securely wherever the mission requires and at all data classifications, with a continuum of technology from on-premises to cloud to the tactical edge. ... to enable data fusion across a diverse range of data sources, we’ve built a solution accelerator called Multi-INT enabled discovery (MINTED) that leverages raw data and metadata as provided and enriches the data with machine learning techniques. These techniques are either pre-trained or unsupervised, providing a no-touch output as a catalyst for any analytic workflow. This becomes useful for many initial triage scenarios, such as forensics, where an analyst is given an enormous amount of data and few clues as to what’s important.
  • Accredited for Azure Government Secret: Azure Stack Hub, Azure Stack Edge, Azure Data Box | Microsoft ... announcing the Department of Defense (DoD) Impact Level 6 (IL6) accreditation for Azure Government Secret of several mission-critical device services for Azure Stack Hub, Azure Stack Edge, and Azure Data Box.
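The "no-touch" unsupervised enrichment described for MINTED can be sketched with a tiny 1-D two-means clustering that buckets files by size for initial triage, so an analyst facing an enormous dump gets a first grouping without any labels. The algorithm choice, the sizes, and the two-cluster assumption are all illustrative, not Microsoft's implementation:

```python
def two_means(values, iters=10):
    """Split numeric values into two clusters around iteratively refined means."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]
        b = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return a, b

sizes = [3, 5, 4, 900, 880, 910]  # hypothetical file sizes: small logs vs. large captures
small, large = two_means(sizes)
print(small, large)  # the two natural groups fall out without labels
```

Real multi-INT enrichment clusters on many features of raw data and metadata at once; the one-dimensional case just makes the unsupervised grouping idea visible.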