Take, for instance, an AI system designed to defend against human or AI hackers. To keep such a system from doing anything harmful or unethical, it may be necessary to challenge it to explain the logic behind a particular action. That logic might be too complex for a person to comprehend, so the researchers suggest having another AI debate the wisdom of the action with the first system, using natural language, while the person observes.
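The debate setup described above can be sketched as a simple loop: two agents take turns arguing for and against a proposed action, and the full transcript is surfaced to a human judge. Everything here (the `debate` function, the stub debaters) is a hypothetical illustration of the protocol's shape, not part of any published system.

```python
# Toy sketch of a two-agent debate observed by a human judge.
# The real proposal would use language-model agents; these stubs
# just produce placeholder arguments so the loop is runnable.

def debate(action, debater_pro, debater_con, rounds=3):
    """Collect alternating natural-language arguments about `action`."""
    transcript = []
    for _ in range(rounds):
        transcript.append(("pro", debater_pro(action, transcript)))
        transcript.append(("con", debater_con(action, transcript)))
    return transcript  # shown to the human observer for judgment

# Stub debaters standing in for language-model agents.
pro = lambda action, t: f"Argument {len(t) // 2 + 1} in favor of {action}"
con = lambda action, t: f"Argument {len(t) // 2 + 1} against {action}"

for side, argument in debate("block suspicious login", pro, con):
    print(side, ":", argument)
```

The point of the protocol is that the human never has to follow the system's full internal logic, only adjudicate a natural-language exchange between the two AIs.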
On Monday, Google released a tool called DeepVariant that uses the latest AI techniques to build a more accurate picture of a person’s genome from sequencing data. DeepVariant turns high-throughput sequencing readouts into a picture of the full genome, automatically identifying small insertion and deletion mutations and single-base-pair mutations in the data.
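To make the task concrete, here is a toy illustration of the problem DeepVariant solves: given a reference sequence and a stack of aligned reads, flag positions where the reads disagree with the reference (single-base-pair mutations). DeepVariant itself applies a deep neural network to pileups of sequencing data; this naive majority-vote sketch only shows the shape of the problem, not Google's method.

```python
# Naive single-base variant calling by majority vote over aligned reads.
# Hypothetical helper for illustration; real callers must also handle
# sequencing errors, alignment artifacts, insertions, and deletions.
from collections import Counter

def call_snps(reference, reads, min_fraction=0.8):
    """Return (position, ref_base, alt_base) for likely single-base variants."""
    variants = []
    for pos, ref_base in enumerate(reference):
        bases = [read[pos] for read in reads if pos < len(read)]
        if not bases:
            continue
        alt, count = Counter(bases).most_common(1)[0]
        if alt != ref_base and count / len(bases) >= min_fraction:
            variants.append((pos, ref_base, alt))
    return variants

reference = "ACGTACGT"
reads = ["ACGTACGT", "ACGAACGT", "ACGAACGT", "ACGAACGT", "ACGAACGT"]
print(call_snps(reference, reads))  # position 3: most reads carry A, not T
```

The hard part in practice is that individual reads are noisy, which is why a learned model that weighs the evidence outperforms simple vote counting.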
Chinese grandmaster Ke Jie tried to outfox DeepMind’s AI player with some unusual moves, but the computer prevailed with surprises of its own.
Researchers at the University of California, Berkeley, developed an “intrinsic curiosity model” to make their learning algorithm work even when there isn’t a strong feedback signal. The trick may help address a shortcoming of today’s most powerful machine-learning techniques, and it could point to ways of making machines better at solving real-world problems.
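The core of the curiosity idea can be sketched in a few lines: the agent receives an internal reward equal to how badly its forward model predicts the next state, so it is drawn to transitions it cannot yet predict even when the environment gives no external reward. The Berkeley model learns these predictions with neural networks in a learned feature space; the linear model and toy dynamics below are stand-ins chosen just to make the reward signal visible.

```python
# Minimal sketch of curiosity as forward-model prediction error.
# All names (ForwardModel, the toy dynamics) are illustrative, not the
# researchers' actual architecture.
import numpy as np

rng = np.random.default_rng(0)

class ForwardModel:
    """Predicts next_state from (state, action); its error is the reward."""
    def __init__(self, dim, lr=0.05):
        self.W = np.zeros((dim, 2 * dim))
        self.lr = lr

    def update(self, state, action, next_state):
        x = np.concatenate([state, action])
        error = next_state - self.W @ x
        self.W += self.lr * np.outer(error, x)  # one gradient step
        return float(np.sum(error ** 2))        # curiosity reward

model = ForwardModel(dim=2)
rewards = []
for step in range(200):
    state = rng.standard_normal(2)
    action = rng.standard_normal(2)
    next_state = state + 0.5 * action           # toy deterministic dynamics
    rewards.append(model.update(state, action, next_state))

# As the model masters these transitions, the curiosity reward shrinks,
# pushing an agent toward parts of the world it cannot yet predict.
```

Because the reward is generated by the agent itself, it keeps learning in environments where external feedback is sparse or absent, which is exactly the shortcoming the paragraph above describes.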