
AI with AI: Artificial Intelligence with Andy Ilachinski
100 episodes — Page 2 of 2
No Time to AI
Andy and Dave discuss the latest in AI news, starting with the US Consumer Product Safety Commission report on AI and ML. The Deputy Secretary of Defense outlines Responsible AI Tenets, along with mandating the JAIC to start work on four activities for developing a responsible AI ecosystem. The Director of the US Chamber of Commerce's Center for Global Regulatory Cooperation outlines concerns with the European Commission's newly drafted rules on regulating AI. Amnesty International crowd-sources an effort to identify surveillance cameras that the New York City Police Department has in use, resulting in a map of over 15,000 camera locations. The Royal Navy uses AI for the first time at sea against live supersonic missiles. And the Ghost Fleet Overlord unmanned surface vessel program completes its second autonomous transit from the Gulf Coast, through the Panama Canal, and to the West Coast. Finally, CNA Russia Program team members Sam Bendett and Jeff Edmonds join Andy and Dave for a discussion on their latest report, which takes a comprehensive look at the ecosystem of AI in Russia, including its policies, resourcing, infrastructure, and activities. Click here to visit our website and explore the links mentioned in the episode.
Someday My 'Nets Will Code
Information about the AI Event Series mentioned in this episode: https://twitter.com/CNA_org/status/1400808135544213505?s=20 To RSVP contact Larry Lewis at [email protected]. Andy and Dave discuss the latest in AI news, including a report on Libya from the UN Security Council's Panel of Experts, which notes the March 2020 use of the "fully autonomous" Kargu-2 to engage retreating forces; it's unclear whether any person died in the conflict, and many other important details are missing from the incident. The Biden Administration releases its FY22 DoD Budget, which increases the RDT&E request, including $874M in AI research. NIST proposes an evaluation model for user trust in AI and seeks feedback; the model includes definitions for terms such as reliability and explainability. EleutherAI has provided an open-source version of GPT-3, called GPT-Neo, which uses an 825GB data "Pile" to train, and comes in 1.3B and 2.7B parameter versions. CSET takes a hands-on look at how transformer models such as GPT-3 can aid disinformation, with their findings published in Truth, Lies, and Automation: How Language Models Could Change Disinformation. IBM introduces a project aimed at teaching AI to code, with CodeNet, a large dataset containing 500 million lines of code across 55 legacy and active programming languages. In a separate effort, researchers at Berkeley, Chicago, and Cornell publish results on using transformer models as "code generators," creating a benchmark (the Automated Programming Progress Standard) to measure progress; they find that GPT-Neo could pass approximately 15% of introductory problems, with GPT-3's 175B parameter model performing much worse (presumably due to the inability to fine-tune the larger model). The CNA Russia Studies Program releases an extensive report on AI and Autonomy in Russia, capping off their biweekly newsletters on the topic.
Arthur Holland Michel publishes Known Unknowns: Data Issues and Military Autonomous Systems, which identifies the known data issues that cause problems for military autonomous systems. The short story of the week comes from Asimov in 1956, with "Someday." And the Naval Institute Press publishes a collection of essays in AI at War: How Big Data, Artificial Intelligence, and Machine Learning Are Changing Naval Warfare. Finally, Diana Gehlhaus from Georgetown's Center for Security and Emerging Technology (CSET) joins Andy and Dave to preview an upcoming event, "Requirements for Leveraging AI." Interview with Diana Gehlhaus: 33:32 Click here to visit our website and explore the links mentioned in the episode.
Just the Tip of the Skyborg
Information about the AI Event Series mentioned in this episode: https://twitter.com/CNA_org/status/1400808135544213505?s=20 To RSVP contact Larry Lewis at [email protected]. Andy and Dave discuss the latest in AI news, including the first flight of a drone equipped with the Air Force's Skyborg autonomy core system. The UK Office for AI publishes a new set of guidance on automated decision-making in government, with Ethics, Transparency and Accountability Framework for Automated Decision-Making. The International Committee of the Red Cross calls for new international rules on how governments use autonomous weapons. Senators introduce two AI bills to improve the US's AI readiness, with the AI Capabilities and Transparency Act and the AI for the Military Act. Defense Secretary Lloyd Austin lays out his vision for the Department of Defense in his first major speech, stressing the importance of emerging technology and rapid increases in computing power. A report from the Allen Institute for AI shows that China is closing in on the US in AI research, expecting to become the leader in the top 1% of most-cited papers in 2023. In research, Ziming Liu and Max Tegmark introduce AI Poincaré, an algorithm that auto-discovers conserved quantities using trajectory data from unknown dynamical systems. Researchers enable a paralyzed man to "text with his thoughts," reaching 16 words per minute. The Stimson Center publishes A New Agenda for US Drone Policy and the Use of Lethal Force. The Onlife Manifesto: Being Human in a Hyperconnected Era, first published in 2015, is available for open access. And Cade Metz publishes Genius Makers, with stories of the pioneers behind AI. Click here to visit our website and explore the links mentioned in the episode.
Rebroadcast: A.I. in the Sky
Andy and Dave welcome Arthur Holland Michel to the podcast for a discussion on predictability and understandability in military AI. Arthur is an Associate Researcher at the United Nations Institute for Disarmament Research, a Senior Fellow at the Carnegie Council for Ethics in International Affairs, and author of the book Eyes in the Sky: the Secret Rise of Gorgon Stare and How It Will Watch Us All. Arthur recently published The Black Box, Unlocked: Predictability and Understandability in Military AI, and the three discuss the inherent challenges of artificial intelligence and the challenges of creating definitions to enable meaningful global discussion on AI.
Doggone
Andy and Dave discuss the latest in AI news, including a new AI website from the White House at AI.gov, which provides a variety of resources on recent reports, news, key US agencies, and other information. The U.S. Navy destroys a surface vessel using a swarm of drones (in combination with other weapons) for the first time. The NYPD announces the retirement of its Boston Dynamics robot dog (Digidog) due to negative public reaction to its use. The French Defence Ministry releases a report on the Integration of Autonomy into Lethal Weapon Systems. A paper in Digital Medicine examines the use of decision-aids in clinical settings. Matt Ginsberg (along with the Berkeley NLP Group) develops Dr. Fill, an algorithm that won this year's American Crossword Puzzle Tournament, with three total errors. And the University of Glasgow publishes research on using return echoes over time to render a 3D image of an environment. Researchers use MRI and machine learning to identify brain activation configurations for 12 different cognitive tasks. Facebook AI Research, Inria, and Sorbonne publish research on emerging properties of self-supervised vision transformers, which includes the ability to segment objects with no supervision or segmentation-targeted objectives. Florian Jaton publishes The Constitution of Algorithms: Ground-Truthing, Programming, Formulation, which examines how algorithms come to be. Melanie Mitchell publishes a paper on Why AI Is Harder Than We Think. And UneeQ creates a Digital Einstein for people to interact with. Click here to visit our website and explore the links mentioned in the episode.
Superhumans
Andy's out this week, but Dave recently had a chance to do a series of interviews on a paper that he wrote, Superhumans: Implications of Genetic Engineering and Human-Centered Bioengineering. So this week's podcast will feature a rebroadcast of the interview that Dave had on Titillating Sports. A big thanks to Rick Tittle and Darren Peck from the Sports Byline USA Network for conducting the interview and for allowing us to share it. Rick and Dave discuss the latest and greatest in genetic engineering and human-centered technology and talk about some of the near-term and far-term implications. Report: https://www.cna.org/CNA_files/PDF/Superhumans-Implications-of-Genetic-Engineering-and-Human-Centered-Bioengineering.pdf Titillating Sports Podcast: https://podcasts.apple.com/us/podcast/titillating-sports-with-rick-tittle/id1451555608
Mnemosyne That Before
Andy and Dave discuss the latest AI news and research, including a blog post from the Federal Trade Commission warning that businesses can and will be held accountable for the fairness of their algorithms. A bipartisan coalition of U.S. Senators has introduced the "Fourth Amendment Is Not for Sale Act," which would ban law enforcement and intelligence agencies from buying data on people in the U.S. and about Americans abroad, if that data was obtained from a user's account or device, through deception, hacking, or other violations of privacy policies or terms of service. Bob Work releases his seven Principles for the Combat Employment of Weapon Systems with Autonomous Functionalities; these principles go into much greater detail about employment and provide a useful way to discuss issues surrounding autonomous weapons. The Congressional Research Service provides a short but dense overview on Lethal Autonomous Weapon Systems. The Ozcan Research Group and UCLA publish research that identifies handwritten numbers by using an optical network made from 3D-printed wafers that diffract polarized light. Project CETI aims to decode whale language using decades of recorded whale sounds. Researchers from the Centre for Neuroscience and the Indian Institute of Science explore the similarities and differences in how deep networks "see" compared with humans, examining 13 specific perceptual effects, such as mirror confusion. Researchers from Stanford and UCSD examine how children's drawing and recognition of visual concepts change over time. On a similar topic, other research explores the relationship between episodic memory and generalization, finding that the relationship changes as children develop. The book of the week is an open-access paper from Stanford, which examines and provides tools for vector embedding of large sets of data, including minimizing distortion. Ben Vickers and K. Allado-McDowell publish the Atlas of Anomalous AI, with reference to the Mnemosyne Atlas.
Andy and Dave accidentally change the pronunciation of "neh-meh-zeen" and completely destroy the joke of this week's podcast title. And take a look at the "fun" site of the week, which puts an eye on webcams, with the EyeCam, the webcam that looks like and mimics the movements of the human eye. Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode.
Xen and the Art of Motorcell Maintenance
Andy and Dave discuss the latest in AI news, including the European Commission's proposal for the regulation of AI. A report in Nature Medicine examines the limitations of the evaluation process for medical devices using AI that the FDA approves. Researchers at MIT translate spider webs into sounds to explore how spiders might sense their world, using machine learning to classify the sounds by spider activity. An NIH panel releases its preliminary ethics rules on making brain-like structures such as neural organoids and neural transplants, and finds little evidence that these structures experience humanlike consciousness or pain. And Andy and Dave spend some time with xenobot researchers Sam Kriegman and Doug Blackiston, who discuss the motivations and findings behind their latest generation of xenobots, synthetic living machines that they have been designing and building in their labs. Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode.
Donkey Pong
Andy and Dave discuss the latest in AI news, including the National Intelligence Council's 7th Edition Global Trends 2040 Report, which sprinkles the importance of AI and ML throughout future trends. A BuzzFeed report claims that the NYPD has misled the public about its use of the facial recognition tool Clearview AI, having run over 5,100 searches with the tool. European activist groups ask the European Commission to ban facial recognition completely, with calls to protect "fundamental rights" in Europe. A report in Digital Medicine examines the diagnostic accuracy of deep learning in medical imaging studies, and identifies an immediate need to develop AI guidelines. Neuralink demonstrates the latest version of its brain-computer interface device, showing a monkey playing Pong with its brain. And the Director of the JAIC, Lt Gen Groen, and the co-chair of the NSCAI, Bob Work, spoke for about an hour on the use and direction of AI in the Department of Defense. In research, Andrew Jones examines how performance scales across board games of different sizes, identifying the scaling of scaling laws. Researchers from AIST, Tokyo Institute of Technology, and Tokyo Denki University demonstrate that they can pre-train a CNN using no natural images, instead using digital images created from fractals. In the paper of the week, Ben Goertzel provides his general theory of general intelligence. And the fun site of the week features the 1996 game "Creatures," with a look into the AI that made them come alive. Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode.
Xenomania
Andy and Dave discuss the latest in AI news, including the resignation of Samy Bengio from Google Brain, which fired AI ethicists Timnit Gebru and Margaret Mitchell in December and February, respectively. The Joint AI Center releases its request for proposals on Data Readiness for AI Development (DRAID). DARPA prepares for the quantum age with a program for Quantum Computer Benchmarking. And a separate DARPA program seeks to enable fully homomorphic encryption with its Data Protection in Virtual Environments (DPRIVE) program. A poll from Hyland on digital distrust shows that Americans think that over the next decade, AI has the most potential to cause harm. Amazon introduces the next level of "biometric consent" required for its delivery drivers, which includes an always-on camera observing the driver and gathering other data; drivers will lose their jobs if they do not consent to the monitoring. And Josh Bongard of the University of Vermont and Michael Levin of Tufts University, along with other researchers from Harvard's Wyss Institute, join together to form the Institute for Computationally Designed Organisms (ICDO), which will focus on "AI-driven designs of new life forms." In research, Bongard publishes the latest iteration of his team's mobile living machines, with Xenobots II, using frog cells to create life forms capable of motion, memory, and manipulation of the world around them. Researchers from the universities of Copenhagen, York, and Shanghai use neural cellular automata to grow 3D objects and functional machines within the Minecraft world. And OpenAI Robotics demonstrates the ability for a robotic arm to solve manipulation tasks, including tasks with previously unseen goals and objects, with asymmetric self-play. And the Book / Fun Site of the Week comes from the Special Interest Group on Harry Q. Bovik (SIGBOVIK), which presents "April Fools" research: descriptions of truly absurd, but fascinating, research.
Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode.
Guise of the Machines
Andy and Dave discuss the latest in AI news, including a report that systematically examined 62 studies on COVID-19 ML methods (from a pool of 2200+ studies), and found that none of the models were of potential clinical use due to methodological flaws or underlying biases. MIT and Amazon identify pervasive label errors in popular ML datasets (such as MNIST, CIFAR, ImageNet) and demonstrate that models may learn systematic patterns of label error in order to improve their accuracy. DARPA's Air Combat Evolution program upgrades its virtual program to include new weapons systems and multiple aircraft, with live Phase 2 tests on schedule for later in 2021. Researchers at the University of Waterloo and Northeastern University publish research working toward self-walking robotic exoskeletons. British researchers add a buccinator (cheek) muscle to robotic busts to better synchronize speech and mouth movements. Russian Promobot is developing hyper-realistic skin for humanoid robots. And Anderson Cooper takes a tour of Boston Dynamics. In research, Leverhulme, Cambridge, Imperial College London, and DeepMind UK publish research on the direct human-AI comparison in the animal-AI environment, using human children ages 6-10 and animal-AI agents across 10 levels of task groupings. Josh Bongard and Michael Levin publish Living Things Are Not (20th Century) Machines, a thought piece on updating how we think of machines and what they *could* be. Professors Jason Jones and Steven Skiena are publishing a running AI Dashboard on Public Opinion of AI. The Australian Department of Defence publishes A Method for Ethical AI in Defence. Raghavendra Gadagkar publishes Experiments in Animal Behavior. And Peter Singer and August Cole publish An Eye for a Storm, envisioning a future of professional military education for the Australian Defence Force. Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode.
The Earth Dies Dreaming
Andy and Dave discuss the latest in AI news, including a letter from the National Transportation Safety Board that asks the National Highway Traffic Safety Administration to regulate autonomous vehicles and driver-assistance technologies more strictly; of note, the letter also uses Tesla as an example, stating that the company is using its customers to beta-test its full self-driving technology on public roads. KPMG surveys business leaders on a variety of AI-related topics and finds that, among other things, many more leaders now perceive that AI technology is moving faster than it should. Researchers at Aston University announce a three-year study to explore the utility of human brain stem cells grown on a microchip, a so-called Neu-ChiP. Researchers from Norway and Australia unveil DyRET, a quadruped robot that can adapt its morphology (such as growing taller or shorter) as it encounters different environments. And Japanese researchers describe a decoded neurofeedback (DecNef) method, which uses fMRI to visualize brain activity and then calculate the similarity between real-time brain activity and brain activity patterns corresponding to specific pre-established memory and mental states. Microsoft's PowerPoint has a Presenter Coach that will listen and watch your presentation and give you pointers on speech patterns, pacing, attention, body language, and other attributes. The two main research items both involve AI agents playing in the Arcade Learning Environment (57 games from Atari's library), and both achieve groundbreaking results in different ways: Uber AI and OpenAI use a model-free approach in Go-Explore, which follows the principle of "first return (to previous states), then explore"; Google AI uses a world-model approach with DreamerV2, which learns behaviors inside a separately trained world model (they also recommend a "clipped record mean" to aggregate scores across the various games). The survey of the week looks at Deepfakes Generation & Detection.
Marjorie McShane and Sergei Nirenburg publish Linguistics for the Age of AI, arguing that researchers must place linguistics front and center for machines to achieve human-level language understanding, with big data and stats approaches as contributing methods. And in the video of the week, Steven Gouveia has produced a documentary on The Age of AI. Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode.
Diplomachine
Andy and Dave discuss the latest in AI news, including the release of the U.S. Navy and Marine Corps Unmanned Campaign Framework, which describes the desired approach to developing and deploying unmanned systems. Google employees demand stronger laws to protect AI researchers, in the wake of the firings of Gebru and Mitchell. Hour One debuts technology that creates fully digital and photorealistic AI personas for content creation, such as a virtual receptionist or information-desk attendant. Pennsylvania state law now allows autonomous delivery robots to use sidewalks and operate on roads. The U.S. Army announces the availability of a training set for facial recognition that also includes thermal camera images, which it will make available for "valid scientific research." In research, Facebook AI demonstrates an algorithm capable of human-level performance in Diplomacy (no-press), using an equilibrium search to reason about what the other players are reasoning; the algorithm achieved a rank of 23 out of 1,128 human players. Researchers in Helsinki and Germany explore the effects of the Uncanny Valley, suggesting that a robot's appearance changes how humans judge its decisions. The Resource of the Week comes via Pete Skomoroch, who pointed out that Wikipedia contains a massive list of datasets for machine learning research (along with useful summary details about each dataset). The Book of the Week is Telling Stories, with authors from around the globe bringing culturally different perspectives on tales of AI. And the Videos of the Week come from MIT, which has published its Introduction to Deep Learning course online, with free access. Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode.
Datalore SemaFor
Andy and Dave discuss the latest in AI news, including an announcement from Facebook AI that it achieved state-of-the-art computer vision performance with its SEER model, by learning from one billion (with a 'b') random, unlabeled, and uncurated public Instagram images, reaching 84% top-1 accuracy on 13k images from ImageNet. DARPA launches a new Perceptually-enabled Task Guidance (PTG) program to help humans perform complex tasks (such as through augmented reality); the effort will include both fundamental research as well as integrated demonstrations. DARPA also announces research teams from its Semantic Forensics (SemaFor) effort, aimed at probing media manipulations. Chris Ume, a Belgian visual effects artist, releases four deepfake videos of Tom Cruise, using two NVIDIA GPUs, two months of training time, and further days of processing and tweaking for each clip. Researchers at the University of Washington, Berkeley, and Google Research use the StyleGAN2 framework to create "time-travel photography," which peels away the limitations of early cameras to reveal restored images of the original photos; the effort also involves the creation of a modern "sibling," which then gets merged with the original. OpenAI publishes the discovery that neurons in its CLIP network respond to the same concept, whether literal, symbolic (e.g., a sketch), or conceptual (e.g., text); they also discover an absurdly simple attack, which involves placing a piece of paper with a word written on it onto an item. The report of the week from UNICEF looks at Adolescent Perspectives on AI, with insights from 245 adolescents from five countries. Montreal.AI provides a 33-page "cheat sheet" with condensed information and links on AI topics. The book of the week from E-IR examines Remote Warfare: Interdisciplinary Perspectives. And the fun site of the week, MyHeritage, lets users animate photos, or "re-animate your dead loved ones."
Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode.
Schrödinger's Slime Mold
Andy and Dave discuss the latest AI news, which includes lots of new reports, starting with the release of the final report of the National Security Commission on AI, over 750 pages that outline steps the U.S. must take to use AI responsibly for national security and defense. The Stanford University Institute for Human-Centered AI (HAI) releases the fourth and most comprehensive edition of its AI Index report, which covers global R&D, technical performance, education, and other topics in AI. Peter Layton at the Defence Research Centre in Australia publishes Fighting AI Battles: Operational Concepts for Future AI-Enabled Wars, with a look at war at sea, on land, and in the air. Drone Wars in the UK and the Centre for War Studies in Denmark release Meaning-Less Human Control: Lessons from Air Defence Systems on Meaningful Human Control for the Debate of AWS, examining automation and autonomy in 28 air defense systems used around the world. And the European Union Agency for Cybersecurity publishes a report on Cybersecurity Challenges in the Uptake of AI in Autonomous Driving. In research, scientists demonstrate that an organism without a nervous system, slime mold, can encode memory of its environment through the hierarchy of its own tube diameter structure. And the Fun Site of the Week uses GPT-3 to generate classic "title/description/question" thought experiments. Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode.
The Little Ingenuity That Could
Andy and Dave discuss the latest AI news, including the Mars landing of Perseverance and its AI-related capabilities, along with its mini-helicopter, Ingenuity. Researchers from Liverpool use machine learning to predict which mammalian hosts can generate novel coronaviruses. Researchers from Estonia and France create artificial human genomes using generative neural networks. A coalition of over 40 organizations has written a letter asking President Biden to ban federal use and funding of facial recognition technology. The law firm Gibson Dunn releases a 2020 Annual Review of AI and Automated Systems, which also contains a great summary of policy and regulatory developments in the last year. In research, scientists at the Commonwealth Scientific and Industrial Research Organisation in Australia use AI to manipulate human behavior, steering participants toward particular actions. Researchers in the Netherlands demonstrate that predictive coding in recurrent neural networks naturally arises as a consequence of minimizing energy consumption. Research in Nature Communications demonstrates a multisensory neural network that integrates information from all five human senses. The report of the week comes from CSET author Matthew Mittelsteadt, which describes AI Verification: Mechanisms to Ensure AI Arms Control Compliance. The first book of the week comes from Moritz Hardt, on Patterns, Predictions, and Actions: A story about machine learning. And the fun site of the week takes a look at the works of painter Wassily Kandinsky, who was also a synesthete (experiencing the fusion of the senses), and offers insights into what he might have heard from looking at his paintings. The second book of the week provides some great information on Synaesthesia – Opinions and Perspectives. Listeners Survey: https://bit.ly/3bqyiHk Click here to visit our website and explore the links mentioned in the episode.
The Low-Res Valley
In AI news, researchers from the University of Copenhagen develop a machine learning model that estimates the risk of death due to COVID at various stages of a hospital stay, including 80 percent accuracy in predicting whether a patient with COVID will require a respirator. The Joint AI Center has a double announcement, with the Tradewind Initiative, which seeks to develop an acquisition ecosystem to speed the delivery of AI capabilities, and with Blanket Purchase Agreements for AI testing and evaluation services. Kaggle publishes a survey on the 2020 State of Data Science and ML, which examines information from ~2000 data scientists about their jobs and their experiences. PeopleTec releases an "Overhead MNIST," a dataset containing benchmark satellite imagery for 10 categories (parking lots, cars, planes, storage tanks, and others). Epic's Unreal Engine introduces the MetaHuman Creator for release later this year, which purports to create ultra-realistic visuals for virtual human characters; Andy uses the moment to describe the "Uncanny Valley," which the Epic tech might manage to leap out of. And researchers from Carnegie Mellon and George Washington University show that, like language transformers, image representations contain human-like biases. In research, scientists at the Israel Institute of Technology create a Ramanujan Machine, which can generate conjectures for mathematical constants, without proof. Researchers demonstrate initial steps of reconstructing video from brain activity. The report of the week examines U.S. public opinion on AI, with declining support for development and divided views on facial recognition. DeepMind London approaches the topic of common sense from the viewpoint of animals. And the book of the week comes from the author of the aforementioned paper, Murray Shanahan, and his 2010 book Embodiment and the Inner Life. Listeners Survey: https://bit.ly/3bqyiHk
D.E.R.Y.L.
In news, Andy and Dave discuss a machine learning algorithm from Synergies Intelligent System and Universität Hamburg that can identify people in a moving crowd who are most likely asymptomatic carriers of COVID-19. US lawmakers have introduced the Public Health Emergency Privacy Act, to boost privacy protections for COVID-19 technology such as tracing apps and vaccine scheduling apps. A team led by researchers from Oxford has introduced new reporting guidelines, dubbed DECIDE-AI, to bridge a gap between development and implementation when using clinical AI technologies. Over 30 authors from a wide swath of organizations have proposed a "living benchmark" to evaluate progress in natural language generation, which they call GEM (Generation, Evaluation, and Metrics). And in the combination we saw coming, research from Queen Mary University demonstrates a deep learning framework for detecting emotion using wireless signals. Researchers at the University of Virginia claim to detect physiological responses to racial bias with 76.1% accuracy, though the work focuses more on exploring links between physiological responses and mental associations with skin color. In research, Stanford researchers explore how learning and evolution occur in complex environments, and how they affect the diversity of morphological forms, with DERL (Deep Evolutionary Reinforcement Learning). Researchers from the University of Illinois Urbana-Champaign introduce GANs for editing images via their latent space, which provides greater control over editing (e.g., editing a mouth without re-generating the entire face). And in the video of the week, a 12-minute video provides a short history of DARPA, with highlights on many of its military robot programs. Listener Survey: https://bit.ly/3bqyiHk
Tempus Fluit
In COVID-related AI news, Andy and Dave discuss research from Texas A&M, Wisconsin-Milwaukee, and SUNY Binghamton, which demonstrates an automatic system for monitoring the physical distancing and face-mask wearing of construction workers, showing how surveillance is rapidly becoming a widely available commodity technology. In regular news, the National Security Commission on AI releases its draft final report, which makes sweeping recommendations on AI as a constellation of technologies. The nominee for Deputy Secretary of Defense, Kathleen Hicks, mentions AI and the JAIC at several points during her testimony. The Information Technology & Innovation Foundation releases a report on "Who Is Winning the AI Race," using 30 different metrics to assess nations' progress in AI. Amnesty International launches a campaign against facial recognition, dubbed "Ban the Scan." And Scatter Lab pulls its Korean chatbot Lee Luda, after it started responding with racist and sexist comments to user inputs. In three "quick" research items, researchers at Massachusetts General Hospital and Harvard Medical School show that single neurons can encode information about others' beliefs. Researchers at MIT and the Institute of Science and Technology Austria introduce a new class of time-continuous recurrent neural network models, which they dub liquid time-constant networks; the approach reduces the size of networks by nearly two orders of magnitude for some tasks. And researchers at the University of Toronto, Microsoft Research, and Cornell University show that Maia, a custom version of AlphaZero, can learn to predict human actions, rather than the most likely winning move. The report of the week looks at The Immigration Preferences of Top AI Researchers. And the book of the week contains almost 40 chapters and 60 authors on a variety of special operations-related topics, in Strategic Latency Unleashed. Listener Survey: https://bit.ly/3bqyiHk
Sokoban, and Thanks for All the Fish!
In COVID-related AI news, Andy and Dave discuss a machine learning transformer model from Facebook AI and the NYU School of Medicine that uses x-rays to determine whether a COVID patient might need more intensive care. A for-pay report from Synced provides a survey of China's AI efforts in response to COVID-19. In regular AI news, European Parliament members adopt guidelines for military and non-military uses of AI. Meanwhile, the UK Competition and Markets Authority cautions that algorithms can damage online competition and should face regulatory scrutiny. Researchers at NOIRLab use machine learning to identify just over 1,200 potential new gravitational lenses. Researchers at Harvard use fish-inspired robots to demonstrate coordinated swarm movements without any outside control. Nature provides reflections from various authors on AI. And the AI Newsletter compiles a list of the 100 most influential people in AI. In research topics, researchers at Cornell demonstrate a curriculum strategy for solving hard Sokoban (the "warehouse man" game) problems, which builds on a pool of sub-tasks. And in a similar, but unrelated effort, researchers at Berkeley and Google Research create a trio of agents to generate challenging but feasible environments for the primary agent (the protagonist) to navigate; they use an antagonist agent, which tries to create hard environments, while a third agent maximizes the differential between the other two agents, which keeps the tasks just at the edge of the protagonist's ability to solve. An article in the Journal of AI Research argues that containment of a superintelligence would be impossible, due to fundamental limits inherent in computing itself. And finally, Chitta Ranjan provides the book of the week, in Understanding Deep Learning: Application to Rare Event Prediction. Click here to visit our website and explore the links mentioned in the episode.
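The differential objective in the Berkeley/Google environment-design item can be made concrete with a toy sketch (the returns and names below are hypothetical illustrations, not code from the paper): the third agent scores a candidate environment by the gap between what the antagonist achieves and what the protagonist achieves, so impossible environments, where both agents fail, score no better than trivial ones.

```python
# Toy illustration of a regret-style environment-design objective
# (hypothetical returns, not taken from the paper).

def regret(antagonist_return, protagonist_return):
    """Designer's reward: large when the environment is solvable by at
    least one agent but still hard for the protagonist."""
    return antagonist_return - protagonist_return

# (antagonist_return, protagonist_return) for three candidate environments:
candidates = {
    "trivial": (1.0, 1.0),     # both succeed -> regret 0
    "impossible": (0.0, 0.0),  # both fail    -> regret 0
    "frontier": (1.0, 0.2),    # solvable, but protagonist struggles
}

# The designer prefers the environment at the edge of the
# protagonist's ability.
best = max(candidates, key=lambda name: regret(*candidates[name]))
```

Maximizing this gap is what keeps the generated tasks "challenging but feasible": a truly unsolvable environment yields zero return for both agents and therefore zero reward for the designer.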
How Machines Judge Humans
In COVID-related AI news, Andy and Dave discuss research that uses NLP to predict mutations in a virus that would allow it to avoid detection by antibodies. In regular AI news, the US Food and Drug Administration publishes an Action Plan for AI and ML, with more to follow. The White House launches the National AI Initiative Office, which will work with the private sector and academia on AI initiatives. The AI Now Institute has launched an effort for "A New AI Lexicon," in which it invites contributors to provide perspectives and narratives for describing new vocabulary that adequately reflects demands and concerns related to AI technology. And the Federal Reserve is asking for comments about the use of AI/ML in banking, as it considers increasing oversight of the technologies. In research, Michal Kosinski at Stanford University publishes in Scientific Reports how facial recognition technology can identify a person's political orientation (to 72% accuracy); Andy and Dave spend some extra time discussing the challenges and implications behind such applications of facial recognition technology. Researchers at Columbia University demonstrate the ability of an AI observer to "visualize the future plans" of an actor, solely through visual information. The report of the week comes from CNAS on AI and International Stability: Risks and Confidence-Building Measures. The book of the week examines How Humans Judge Machines. And finally, a YouTube documentary from Noclip examines how machine learning plays out in Microsoft's Flight Simulator. Click here to visit our website and explore the links mentioned in the episode.
The Persistence of Memor-E
In COVID-related AI news, Andy and Dave discuss an editorial in The Lancet Digital Health, which examines whether preliminary models add clinical value to health-care systems. In regular AI news, an Italian court rules that the European food delivery app Deliveroo used a "discriminatory" algorithm, potentially opening the door for liability even with unintentional algorithmic discrimination. A study from Google, OpenAI, Apple, Stanford, Berkeley, and Northeastern shows that large language models trained on public data can expose personal information, by making it possible to extract specific pieces of training data. In research, OpenAI combines the mini-GPT algorithm DALL-E with the image-to-text algorithm CLIP to create an extremely powerful and flexible generative model, capable of generating high-quality images based on text instructions. The report of the week comes from the Connections 2020 Conference proceedings, which examined Representing AI in Wargames. The survey of the week looks at neural network interpretability. Kevin Murphy provides the book of the week, with Probabilistic Machine Learning: An Introduction. And Geoff Hinton speaks on Eye on AI with Craig S. Smith about his latest research and the future of AI. Click here to visit our website and explore the links mentioned in the episode.
Always Look on the Bright Side of Life
In COVID-related news, Andy and Dave discuss a commercial AI model from Biocogniv that predicts COVID-19 infection using only blood tests, with a 95% sensitivity and a 49% specificity. In a story that highlights the general challenge with algorithms, Stanford reported challenges in using a rules-based algorithm to determine priority of vaccine distribution, when it omitted front-line doctors from initial distribution. In non-COVID AI news, Vincent Boucher and Gary Marcus organize a second "AI Debate" on the topic of Moving AI Forward: An Interdisciplinary Approach, which included Daniel Kahneman, Christof Koch, Judea Pearl, Fei-Fei Li, Margaret Mitchell, and many others. Reuters reports that Google's PR, policy, and legal teams have been editing AI research papers in order to give them a more positive tone, and to reduce discussions of the potential drawbacks of the technology. And Microsoft patents a "chat bot technology" that would seek to reincarnate deceased people. In research, Google announces MuZero, which masters chess, Go, shogi, and the Atari Learning Environment by planning with a learned model (and no information on the rules). Jeff Heaton provides the book of the week, with Applications of Deep Neural Networks. A survey paper from four universities looks at Data Security for Machine Learning. Another survey paper examines how researchers develop and use datasets for machine learning research. And the ConwayLife.com community celebrates the 50th anniversary of the Game of Life, including an online simulator called the Exploratorium. Click here to visit our website and explore the links mentioned in the episode.
Pork Rewinds
Just in time for the holidays, Andy and Dave look back at some of the more memorable AI-related stories from 2020. They begin with the passing of mathematician John Conway, creator of The Game of Life, who died in April at 82 from complications due to COVID-19; Andy and Dave will talk more about The Game of Life in next week's podcast. With an example of how not to use AI, in July, the International Baccalaureate Educational Foundation turned to machine learning algorithms to predict student grades, due to COVID-related cancellations of actual testing, much to the frustration of numerous students and parents. Also in July, over 1400 mathematicians signed and delivered a letter to the American Mathematical Society, urging researchers to stop working on predictive-policing algorithms. In September, Elon Musk demonstrated the latest iteration of Neuralink, complete with pig implantees. And finally, Andy and Dave examine the GPT family of algorithms with a discussion on GPT-2 and GPT-3. Click here to visit our website and explore the links mentioned in the episode.
The 4-Bit Blopera
In COVID-related AI news, Andy and Dave discuss the results of the C3.ai COVID-19 challenge. In regular AI news, the US Air Force announces an AI, ARTUµ, controlling a military plane for the first time. A Nature publication shows the AI collaboration links between institutions over the last 5 years. The IBM T.J. Watson Research Center publishes research on 4-bit training of deep neural networks to accelerate the process. Researchers at Oregon State University publish advances with a new type of optical sensor that can naturally detect moving objects. And the Naval Surface Warfare Center at Crane, along with ONR, announces a prize challenge for AI in Small Unit Maneuver (AISUM). In meta-research, researchers create a graph-based toolkit for analysis and comparison of games. Other research examines the fossil record to discover patterns in Earth's biological mass extinction events. In the book of the week, the US Army War College Class of 2020 publishes an Estimation of Technological Convergence. György Buzsáki's The Brain from the Inside Out takes a different look at how the brain functions. And for the holidays, Andy and Dave play around with Google's Blob Opera singers.
The Final Sunbrawler
Andy and Dave discuss the recent announcement that the U.S. Department of Defense will adopt the Defense Innovation Board's detailed principles for using AI. The European Commission releases its white paper on AI. The University of Buffalo's AI Institute receives a grant to study gamers' brains in order to build AI military robots. Microsoft announces Turing-NLG, a 17-billion-parameter language model. MIT's CSAIL demonstrates TextFooler, which makes synonym-like substitutions of words, the results of which can severely degrade the accuracy of NLP classifiers. Researchers from McAfee show simple tricks to fool Tesla's Mobileye EyeQ3 camera. And Andy and Dave conclude with a discussion with Professor Josh Bongard, from the University of Vermont, on his recent "xenobots" research.
Will You, Won't You Join the DANs?
In COVID-related AI news, Andy and Dave discuss a report from MIT that identifies gaps in coverage from COVID vaccines, and uses machine learning to identify peptide additions to increase their efficacy. The GAO and the National Academy of Medicine release a combined report on AI in health care. Nature provides access to a large collection of open datasets related to COVID research and information. In non-COVID-related AI news, President Trump signs an executive order on the governmental development of AI, which includes a requirement for OMB to produce a roadmap by the end of May 2021. The FY21 National Defense Authorization Act boosts the JAIC's role and performance, including a funding stream for acquisition authority. The ML-Reproducibility Challenge 2020 kicks off, with submissions due by 29 January 2021. Researchers in China announce the creation of a photonic quantum computer that achieves quantum supremacy in conducting Gaussian boson sampling. The Bjarke Ingels Group unveils its plans to create an "AI city," a tech-hub in Chongqing, China. And the Navy's uncrewed Overlord test vessel completes a 4700-nautical-mile journey with minimal human assistance, including passage through the Panama Canal. Researchers at Georgia State University demonstrate an approach to continual learning with deep artificial neurons (DANs), a neural network in which the neurons are themselves small deep neural networks. And researchers at Tencent AI Lab demonstrate an almost society-of-agents approach to creating a deep reinforcement learning algorithm that can play multi-player online battle arena (MOBA) games. Click here to visit our website and explore the links mentioned in the episode.
Poetein Folding
In COVID-related AI news, Andy and Dave discuss a Facebook model that provides county-level forecasts on the spread of COVID-19. In non-COVID AI news, DeepMind's AlphaFold 2 won the 14th biennial Critical Assessment of Structure Prediction (CASP), scoring above 90 on a global distance test for around two-thirds of the test proteins. Partnership on AI establishes The AI Incident Database (AIID) to provide an open-access resource on failures of AI systems, currently containing over 1,000 publicly available "incident reports." CSET publishes a report on '"Cool Projects" or "Expanding the Efficiency of the Murderous American War Machine?"', which examines the perspectives of US AI industry professionals toward working on Department of Defense-funded AI projects. The UN, in conjunction with Trend Micro Research and the European Cybercrime Centre, releases a report on Malicious Uses and Abuses of AI, which highlights the potential physical impacts of hackers on autonomous- and AI-related technologies. And LtGen Michael Groen, the new Director of the Joint AI Center, provides an overview of the JAIC's goals and objectives. In research, NVIDIA, Rice University, and Caltech publish the BONGARD-LOGO benchmark set, as an expansion of the Bongard Problems, which provide free-form shape concepts to test context-dependent perception, analogy-making perception, and perception with few samples. Joshua C. Gellers provides the book of the week, examining the case for Rights for Robots. And Google AI releases Verse by Verse, which draws upon the writings of various poets to help users generate their own poems, of which Andy and Dave both share examples. Click here to visit our website and explore the links mentioned in the episode.
Underbyte
In COVID-related AI news, Andy and Dave discuss research from MIT, IBM, and Harvard Medical School, which uses machine learning on Reddit posts to track the pandemic's impact on mental health. And the UK is planning to use AI to spot dangerous side effects in COVID vaccinations. In non-COVID AI news, Andy and Dave take a look at how the AI-based poll predictions fared in the 2020 US election. The White House issues guidance for federal agencies on AI applications. The University of Copenhagen makes Carbontracker available, which provides an estimate of the energy consumption for training deep learning algorithms. DARPA selects 5 teams to head to the next phase of its Air Combat Evolution competition. And the 34th Conference on Neural Information Processing Systems (NeurIPS) plans for virtual proceedings in early December. In research, 40 authors from Google publish findings on the challenges of deploying an AI system into the real world, such as unexpectedly poor behavior, which they attribute to underspecification. The Marine Corps University Press releases the second volume of Destination Unknown. Andy's "vintage magazine of the week" is the April 1985 issue of Byte, which covered artificial intelligence. And Matt Stone and Trey Parker introduce Sassy Justice, a parody comedy that warns of the dangers of deepfakes by itself being a series of deepfakes (including President Trump, Facebook CEO Mark Zuckerberg, former Vice President Al Gore, and many others). Click here to visit our website and explore the links mentioned in the episode.
A.I. in the Sky
Andy and Dave welcome Arthur Holland Michel to the podcast for a discussion on predictability and understandability in military AI. Arthur is an Associate Researcher at the United Nations Institute for Disarmament Research, a Senior Fellow at the Carnegie Council for Ethics in International Affairs, and author of the book Eyes in the Sky: the Secret Rise of Gorgon Stare and How It Will Watch Us All. Arthur recently published The Black Box, Unlocked: Predictability and Understandability in Military AI, and the three discuss the inherent challenges of artificial intelligence and the challenges of creating definitions to enable meaningful global discussion on AI. Click here to visit our website and explore the links mentioned in the episode.
The Rosetta Drone
In COVID-related AI news, MIT researchers have published a machine learning algorithm that can diagnose COVID-19 by the sounds of a person's forced cough. And the US Veterans Affairs Department rolls out a machine learning tool to predict mortality rates of COVID-19 patients. In non-COVID news, the JAIC releases the Department of Defense's AI Education Strategy, which contains a detailed description of requirements, required instruction, and competencies. DoD also releases a new electromagnetic spectrum strategy, which contains a number of machine-learning mentions. And Tesla began making available its "full self-driving beta" to a small number of "expert and careful drivers." Researchers from MIT CSAIL have created a machine learning system that can reportedly decipher "lost" languages; they built it on several principles from insights into historical linguistics, such as the observation that languages generally only evolve in certain predictable ways (such as sound substitutions). In other language news, Facebook makes available a machine learning model that can translate directly between 100 different languages (rather than using English as a go-between). Research from Caltech and Purdue creates a "Fourier neural operator" that can solve parametric partial differential equations nearly 1000 times faster than traditional solvers. And research from the University of Waterloo looks at "less than one"-shot learning, attempting to allow an AI to learn with almost no data (and thus recognize more classes than the number of examples it trained on). Click here to visit our website and explore the links mentioned in the episode.
Thunderbots
Sam Bendett joins Andy and Dave to discuss the latest developments and happenings in Russia's research into artificial intelligence and autonomy capabilities. They discuss Russia's national strategy and the challenges that have occurred in programmatic implementation due to COVID impacts. They also discuss the status of higher education in Russia and the standing of various institutions, as well as their relationship and interaction with the global community of researchers. They cover a variety of other trends and topics, including the Army 2020 convention and some of the announcements made during that event; and they discuss CNA's Russia Program and its on-going series of newsletters dedicated to summarizing the latest in Russian advances and research in AI. Click here to visit our website and explore the links mentioned in the episode.
Lone Hacker and Child
In COVID-related AI news, Andy and Dave discuss the COVID-19 Grand Challenge from C3.ai. In non-COVID AI news, the Department of Defense releases its Data Strategy. The Defense Science Board publishes a report on Counter Autonomy. The National Security Commission on AI releases its 3rd Quarter interim report and recommendations. The Center for Security and Emerging Technology releases a report on Building Trust through Testing. And the US Patent and Trademark Office publishes the responses to its initial queries, in Public Views on AI and Intellectual Property Policy. Researchers from MIT and Berkeley explore the idea that children's learning is analogous to hacking, by making code better through an open-ended set of goals and activities. Nathan Benaich and Ian Hogarth release the State of AI Report 2020, which examines the latest developments in AI research across a variety of areas (such as observing that only 15% of papers publish their code). And Taylor and Dorin publish Rise of the Self-Replicators: Early Visions of Machines, AI and Robots that Reproduce and Evolve. Click here to visit our website and explore the links mentioned in the episode.
PROGRESS Out of the Blue
Andy and Dave have a chat with Chad Jenkins, Professor of Computer Science and Engineering at the University of Michigan, Director of the Laboratory for Perception, RObotics, and Grounded REasoning SystemS (PROGRESS), and newest member of CNA's Board of Trustees. They discuss Chad's background and his current research at Michigan, which includes interactive robot systems and human-robot interaction. And then they discuss a variety of topics ranging from movement primitives, neural networks and fat tails, the issue of reinvention, students' experiences with AI research and the role of historical research, the culture of research in AI, and much more. Click here to visit our website and explore the links mentioned in the episode.
The Robohattan Project
The bipartisan Future of Defense Task Force releases its 2020 report, which includes the suggestion of using the Manhattan Project as a model to develop AI technologies. Facebook AI releases Dynabench as a way to dynamically benchmark the performance of machine learning algorithms. Amsterdam and Helsinki launch AI registers that explain how they use algorithms, in an effort to increase transparency. In research, the Allen Institute for AI, University of Washington, and University of North Carolina publish research on X-LXMERT (learning cross-modality encoder representations from transformers), which trains a transformer on both text and images, to then generate images from scratch from descriptions (e.g., a large clock tower in the middle of a town). Researchers at Swarthmore College and Los Alamos National Labs demonstrate the challenges that neural networks of various sizes have in learning Conway's Game of Life. Maria Jeansson, Claudio Sanna, and Antoine Cully create a stunning visual infographic on "automated futures" technologies. And Joshua Epstein, a longtime expert in agent-based modeling, delivers the European Social Simulation Association Award keynote speech. Click here to visit our website and explore the links mentioned in the episode.
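For readers unfamiliar with the target of the Swarthmore/Los Alamos experiment, the update rule those networks struggle to learn is tiny; a minimal NumPy sketch of one Game of Life step (a standard reference implementation, not code from the paper) is:

```python
import numpy as np

def life_step(grid):
    """One step of Conway's Game of Life on a 2D 0/1 array.

    A live cell survives with 2 or 3 live neighbors; a dead cell
    becomes live with exactly 3. Edges wrap around (toroidal grid).
    """
    # Count each cell's eight neighbors by summing shifted copies.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# The "blinker": three cells in a vertical bar flip to a horizontal bar.
g = np.zeros((5, 5), dtype=int)
g[1:4, 2] = 1
g2 = life_step(g)
```

Applying `life_step` twice returns the blinker to its starting position, which makes patterns like this convenient sanity checks; the research item's point is that even this two-line local rule is surprisingly hard for gradient-trained networks to discover reliably.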
Tell-Tale Heart
In COVID-related AI news, Youyang Gu provides world- and county-level COVID-19 predictions using machine learning, along with a rolling examination of accuracy. In regular AI news, a military coalition of 13 countries meets to discuss the use and ethics of AI. Orcan Intelligence provides a deeper look into Europeans' concerns about AI technologies. Ben Lee and the Library of Congress unveil the full open version of the Newspaper Navigator, which provides access to 1.56 million photographs from newspapers. Research from Intel and Binghamton University uses the pulse of the beating heart to identify deepfake videos with 97% accuracy. And Arthur Holland Michel publishes The Black Box, Unlocked: Predictability and Understandability in Military AI. Click here to visit our website and explore the links mentioned in the episode.
the social bot network
Andy and Dave kick off Season 4.0 of AI with AI with a discussion on social media bots. CNA colleagues Meg McBride and Kasey Stricklin join to discuss the results of their recent research efforts, in which they explored the national security implications of social media bots. They describe the types of activities that social media bots engage in (distributing, amplifying, distorting, hijacking, flooding, and fracturing), how these activities might evolve in the near future, the legal frameworks (or lack thereof), and the implications for US special operations forces and the broader national security community. Click here to visit our website and explore the links mentioned in the episode.
CONSORTing with the GPT
In COVID-related AI news, another concerning report, this time in Nature Medicine, found "serious concerns" with 20,000 studies on AI systems in clinical trials, with many reporting only the best-case scenarios; in response, an international consortium has developed CONSORT-AI, reporting guidelines for clinical trials involving AI. In Nature, an open dataset provides a collection and overview of governmental interventions in response to COVID-19. In regular AI news, the DoD wraps up its 2020 AI Symposium. And the White House nominates USMC Maj. Gen. Groen to lead the JAIC. The latest report from NIST shows that facial recognition technology still struggles to identify people of color. Portland, Oregon passes the toughest ban on facial recognition technology in the US. And The Guardian uses GPT-3 to generate some hype. In research, OpenAI demonstrates the ability to apply transformer-based language models to the task of automated theorem proving. Research from Berkeley, Columbia, and Chicago proposes a new test to measure a text model's multitask accuracy, with 16,000 multiple-choice questions across 57 task areas. A report from AI Now takes a look at regulating biometrics, which includes tech such as facial recognition. And the 37th International Conference on Machine Learning makes its proceedings available online. Click here to visit our website and explore the links mentioned in the episode.
[Abstraction Intensifies]
In COVID-related AI news, a report from Cambridge University and the University of Manchester examines recent studies on using chest x-rays and CT scans to detect and diagnose COVID, and finds that only 29 of 168 studies had reproducible results; the report further found that all of the studies had high or unclear risk of bias, such that none of the studies had value for use in clinics. CSET provides an overview of how China has used AI in its COVID-19 response. In non-COVID AI news, a GAO report finds systemic problems with facial recognition technology at U.S. airports. University College London provides an overview of AI's use in crime, with deepfakes ranked as the most concerning. Researchers at the University of Warwick and the Alan Turing Institute develop a machine learning algorithm to identify potential planets from astronomy data. And NASA uses an algorithm to predict more accurately when hurricanes will rapidly intensify. In research, MIT, the MIT-IBM Watson AI Lab, and Columbia University present a machine learning model to abstract relations in videos about everyday actions. Researchers in the Netherlands demonstrate that (large!) adversarial patches can work for surveillance imagery of military assets on the ground. The UN Interregional Crime and Justice Research Institute releases a Special Collection on AI. Researchers in Germany and Korea provide a view of continual and open-world learning. And Georgia Tech provides the People Map as a way to discover research expertise at an institution. Click here to visit our website and explore the links mentioned in the episode.
Some Pigsel
In COVID-related AI news, Andy and Dave discuss an effort from Google and Harvard to provide county-level forecasts on COVID-19 for hospitals and first responders. The National Library of Medicine, National Center for Biotechnology Information, and NIH provide COVID-19 literature analysis with interesting data analytic and visualization tools. In regular AI news, Elon Musk demonstrates the latest iteration of Neuralink, complete with pig implantees. The UK attempted a prediction system for Most Serious Violence, but found that it had serious flaws. Amazon awards a $500k "Alexa Prize" to Emory University students for their Emora chatbot, which scored a 3.81 average rating across categories. The Bipartisan Policy Center releases two reports on AI. And Russell Kirsch, inventor of the pixel and other groundbreaking technology, passed away on 11 August at the age of 91. In research, three papers tackle the problem of reconstructing 3D (in some cases, 4D) models of locations based on tourist photos taken from different vantage points and at different times: the NeRF (Neural Radiance Fields) model and the Plenoptic model. The Human Rights Watch releases a report summarizing Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control. Springer-Verlag releases yet-another-freebie with An Introduction to Ethics in Robotics and AI. And the Conference on Computer Vision & Pattern Recognition has posted the papers and videos from its June 2020 session.
Rebroadcast: What is AI?
CNA's Center for Autonomy and Artificial Intelligence kicks off its first panel for 2019 with a live recording of AI with AI! Andy and Dave take a step back and look at the broader trends of research and announcements involving AI and machine learning, including: a summary of historical events and issues; the myths and hype, looking at expectations, buzzwords, and reality; hits and misses (and more hype!), and some of the many challenges of why AI is far from a panacea. Click here to visit our website and explore the links mentioned in the episode.
Highway to the Danger Zone
With Season 3 drawing to a close, Andy and Dave decided to focus this discussion entirely on the latest results from DARPA's Air Combat Evolution (ACE) program. On 20 August, DARPA held a contest between 8 competitors, and pitted their AI agents in simulated combat against each other, and against a human pilot (who used a VR system). Heron Systems won the event, beating out the other AI agents, and also not allowing the human pilot to attain a valid targeting solution. Andy and Dave discuss the results, the limitations, and the broader context of these results in light of other research and announcements. Click here to visit our website and explore the links mentioned in the episode.
Elementary, Dear GPT
In COVID-related AI news, Andy and Dave discuss a survey from Amazon Web Services that examines the current status of Internet of Things applications related to COVID-19, including scenarios that might help to reduce the severity of an outbreak. MIT publishes a combinatorial machine learning method to maximize the coverage of a COVID-19 vaccine. In "quick takes" on research, Andy and Dave discuss research from Microsoft, University of Washington, and UC Irvine, which provides a checklist to help identify bugs in natural language processing algorithms. A paper from Element AI and Stanford examines whether benchmarks for natural language systems actually correspond to how we use those systems. University of Illinois at Urbana-Champaign, Columbia University, and the US Army Research Lab introduce GAIA, which processes unstructured and heterogeneous multimedia data, creates a coherent knowledge base, and allows for text queries. Research published in Nature Neuroscience examines the brain connectivity of 130 mammalian species and finds that the efficiency of information transfer through the brain does not depend on the size or structure of any specific brain. And finally, Andy and Dave spend some time talking about the broader implications of GPT-3, the experiments that people are conducting with it, and how it is not an AGI. Dave concludes with an analogy from Star Trek: The Next Generation, which he gets mostly correct, though he misattributes Geordi La Forge's action to Dr. Pulaski. If only he had a positronic matrix! Click here to visit our website and explore the links mentioned in the episode.
Remember, Remember, the Fakes of November
In COVID-related AI news, Andy and Dave discuss an article from Wired that describes how COVID confounded most predictive models (such as in finance). And NIST investigates the effect of face masks on facial recognition software. In regular AI news, CSET and the Bipartisan Policy Center release a report on "AI and National Security," the first of four "meant to be a roadmap for Washington's future efforts on AI." The Intelligence Community releases its AI Ethics Principles and AI Ethics Framework. Researchers from the University of Chicago announce "Fawkes," a way to "cloak" images and befuddle facial recognition software. In research, OpenAI demonstrates that GPT-2, a generator designed for text, can also generate pixels (instead of words) to fill out 2D pictures. Researchers at Texas A&M, the University of S&T of China, and the MIT-IBM Watson AI Lab create a 3D adversarial logo to cloak people from facial recognition. And other research explores how the brain rewires when given an additional thumb. CSET publishes Deepfakes: A Grounded Threat Assessment. And MyHeritage provides a "photo enhancer" that uses machine learning to restore old photos. Click here to visit our website and explore the links mentioned in the episode.
Bots Behaving Badly
In COVID-related AI news, Tencent AI Labs publishes a machine learning model that can predict the risk of a coronavirus patient developing severe illness. Unsupervised machine learning on data from the U.K.'s COVID Symptom Tracker, which has more than 4 million users, suggests patients cluster into roughly 6 different symptom types. Amazon Web Services releases its version of a scientific literature search on COVID-19. Aminer.org offers an open-access knowledge graph of COVID-19. And "Digital Contact Tracing for Pandemic Response" takes a look at global approaches and results with implementing contact tracing. In regular AI news, the National Security Commission on AI releases its latest quarterly report, with 35 recommendations. The latest Congressional Research Service report covers Emerging Military Technologies, including AI and LAWS. Facebook rolls out a "bot army" to simulate "bad behavior" on a parallel version of its platform, in an effort to understand and combat online abuse. In research, DeepMind publishes findings on reinforcement learning, with a meta-learning approach that discovers an update rule that includes "what to predict" as well as "how to learn from it." Research from Berkeley, DeepMind, and MIT explores exploration by comparing how children learn with reinforcement learning agents in a unified environment. Military Review publishes an article by Courtney Crosby, which describes a framework for operationalizing AI for algorithmic warfare. DeepMind and University College London examine deep reinforcement learning and its implications for neuroscience. And MIT makes available online a full lecture series by Marvin Minsky on "The Society of Mind." Click here to visit our website and explore the links mentioned in the episode.
Atlas Surveilled
In COVID-related AI news, Andy and Dave discuss research that provides a comprehensive survey on applications of AI in fighting COVID-19. The Stanford Institute for Human-Centered AI and the AI Initiative at the Future Society launch a global alliance: Collective and Augmented Intelligence against COVID-19 (CAIAC). MIT and the IBM Watson AI Lab publish a paper that suggests a computational limit to progress in deep learning. The Atlas of Surveillance provides an open-source look at technologies that law enforcement agencies are using across the US, including facial recognition and drones. Similarly, Surfshark has compiled information on the status of facial recognition technology around the globe, along with additional useful information. MIT finds systematic shortcomings in the ImageNet dataset, with an observation that the crowdsourcing data collection pipeline can cause "misalignments." Research from Google Brain shows that "self-attention" can allow agents to identify task-critical visual hints, and ignore task-irrelevant elements. UC Berkeley, Google, CMU, and Facebook demonstrate "one policy to rule them all," where they use one global policy to control the movement of a wide variety of agent morphologies (which would normally require training and tuning for each separate morphology). The Army's Cyber Institute releases the "Invisible Force" graphic novel, which examines potential uses of AI technology in a future fictional scenario. Alife 2020 makes a compilation of its July conference available, clocking in at nearly 800 pages. And Gwern examines the creative side of GPT-3 through poetry, humor, and other probing interactions. Click here to visit our website and explore the links mentioned in the episode.
Life Is Like a Box of Matrices
Andy and Dave start with COVID-related AI news, and efforts from the Roche Data Science Coalition for UNCOVER (the United Network for COVID-19 Data Exploration and Research), which includes a curated collection of over 200 publicly available COVID-19 related datasets; efforts from Akai Kaeru are included. The Biomedical Engineering Society publishes an overview of emerging technologies to combat COVID-19. Zetane Systems uses machine learning to search the DrugVirus database and information from the National Center for Biotechnology Information to identify existing drugs that might be effective against COVID. And researchers at the Walter Reed Army Institute of Research are using machine learning to narrow down a space of 41 million compounds to identify candidates for further testing. And the IEEE hosted a conference on 9 July, "Does your COVID-19 tracing app follow you forever?" In non-COVID-related AI news, MIT takes offline the TinyImages dataset, due to its inclusion of derogatory terms and images. The second (actually first) wrongful arrest from facial recognition technology (again by the Detroit Police Department) comes to light. Appen Limited releases its annual "State of AI and ML" report, with a look at how businesses are (or aren't) considering AI technologies. Anaconda releases its 2020 State of Data Science survey results. And the International Baccalaureate Educational Foundation turns to machine learning algorithms to predict student grades, due to COVID-related cancellations of actual testing, much to the frustration of numerous students and parents. Research from the Vector Institute and the University of Toronto tackles analogy and the Raven Progressive Matrices with an ensemble of three neural networks for objects, attributes, and relationships. 
Researchers at the University of Sydney and Imperial College London have established CompEngine, a collection of time-series data (over 24,000 initially) from a variety of fields, and have placed them into a common feature space; CompEngine then self-organizes the information based on empirical properties. Garfinkel, Shevtsov, and Guo make Modeling Life available for free. Meanwhile, Russell and Norvig release the not-so-free 4th Edition of AI: A Modern Approach. Lex Fridman interviews Norvig in a video podcast. And Elias Henriksen creates the Computer Prophet, which generates metaphors from a database of collected sayings. Click here to visit our website and explore the links mentioned in the episode.
A Tesseract to Follow
In COVID-related AI news, Purdue University has built a website that tracks global response to social distancing, by pulling live footage and images from over 30,000 cameras in 100 countries. Simon Fong, Nilanjan Dey, and Jyotismita Chaki have published Artificial Intelligence for Coronavirus Outbreak, which examines AI's contribution to combating COVID-19. Researchers at Harvard and Boston Children's Hospital use a "regular" Bayesian model to identify COVID-19 hotspots over 14 days before they occur. In non-COVID AI news, the acting director of the JAIC announces a shift to enabling joint warfighting operations. The DoD Inspector General releases an Audit of Governance and Protection of DoD AI Data and Technology, which reveals a variety of gaps and weaknesses in AI governance across DoD. Detroit Police Chief James Craig reveals that the police department's experience with facial recognition technology resulted in misidentified people about 96% of the time. Over 1400 mathematicians sign and deliver a letter to the American Mathematical Society, urging researchers to stop working on predictive-policing algorithms. DARPA awards the Meritorious Public Service Medal to Professor Hava Siegelmann for her creation and research in the Lifelong Learning Machines Program. And Horace Barlow, one of the founders of modern visual neuroscience, passed away on 5 July at the age of 98. In research, Udrescu and Tegmark release AI Feynman 2.0, with unsupervised learning of equations of motion by viewing objects in raw and unlabeled video. Researchers at CSAIL, NVidia, and the University of Toronto create the Visual Causal Discovery Network, which learns to recognize underlying dependency structures for simulated fabrics, such as shirts, pants, and towels. In reports, the Montreal AI Ethics Institute publishes its State of AI Ethics. 
In the video of the week, Max Tegmark discusses the previously mentioned research on equations of motion, and also discusses progress in symbolic regression. And GanBreeder upgrades to ArtBreeder, which can create realistic-looking images from paintings, cartoons, or just about anything. Click here to visit our website and explore the links mentioned in the episode.
Crime & Publishment
It's a week of huge announcements! But first, in COVID-related AI news, Andy and Dave discuss a review paper in Chaos, Solitons, and Fractals that provides a more international focus on the role of AI and ML in COVID research. CSAIL teams with Ava Robotics to design a robot that maneuvers between waypoints and disinfects warehouse surfaces with UV-C light. C3.ai Digital Transformation Institute awards $5.4M to 26 AI researchers for projects related to COVID-19. In non-COVID news, the Association for Computing Machinery calls for the immediate suspension of facial recognition technologies until the technology is more mature and reliable. US lawmakers have introduced a bill that would ban police use of facial recognition, while separate bills seek to increase the AI talent available to the Department of Defense, and work to realign and rewire the JAIC within DoD. Over 2300 researchers sign a petition to Springer Nature to reject a publication from Harrisburg University, which developed facial recognition software to predict whether somebody was going to be a criminal. Meanwhile, researchers from Stanford demonstrate the problem of reproducibility by giving a data set of brain scans to 70 different research teams; no two teams chose the same workflow to analyze the data, and the final conclusions showed sizeable variation. In a similar vein, researchers at Duke University examine the historical record of brain scan research and find poor correlation across experiments. In research, the "best paper" for the Conference on Computer Vision and Pattern Recognition goes to a team from Oxford, who use unsupervised learning methods and symmetry to convert single 2D images into 3D models. Researchers at Uber, the University of Toronto, and MIT use 3D simulated worlds to generate synthetic data for training LiDAR systems on self-driving vehicles. Calum MacKellar makes Cyborg Mind available, a look into the future of cyberneuroethics. 
And Johns Hopkins prepares for a second seminar on Operationalizing AI in Health. Click here to visit our website and explore the links mentioned in the episode.
Dust in the Mind
For COVID-related AI news, Andy and Dave discuss the Stanford Social Innovation Review report on the problem with COVID-19 AI solutions (e.g., data gaps, inconsistency, etc.), and how to fix them. The National Endowment for Science Technology and the Arts (NESTA) provides a thorough report on AI and COVID-19, whose findings generally suggest that barriers might exist for the employment of AI in tackling COVID-19. In regular AI news, the US has its first known case of an erroneous arrest due to facial recognition technology, with the arrest of Robert Williams in Detroit in January 2020 (and disclosed on 24 June). The European Commission white paper on AI gets two more responses, from Facebook and from the Center for Data Innovation. Sergei Ivanov provides a breakdown of contributors for the upcoming International Conference on Machine Learning. Researchers have identified a new threat vector against neural networks, one that increases energy consumption and latency. And a follow-up with the PULSE upsampling tool shows a bias toward producing white faces, likely inherited from the training data of StyleGAN, the generative model it builds on. In research, Denny Britz examines replicability issues in AI research, and how academic incentive systems are driving the AI research community toward certain types of research. The Marine Corps University Journal becomes the Journal of Advanced Military Studies, and its first issue focuses on innovation and future war. The Combat Studies Institute Press publishes On Strategy: A Primer, including a chapter on future war by Mick Ryan. And Major Nicholas Narbutovskih pens Dust, a story about two warring factions with different approaches to autonomous systems. Click here to visit our website and explore the links mentioned in the episode.