HBM147: Chasing Tardigrades

Image by Jeff Emtman. Kaleidoscope collage of moss microscopy photos.

 

With much of the world shut down over the last year, HBM host Jeff Emtman started wondering if there were smaller venues where the world still felt open. 

In this episode, Jeff interviews Chloé Savard of the Instagram microscopy page @tardibabe about the joy of looking at small things, and whether it’s possible to find beauty in things you don’t understand.  

Chloé also gives Jeff instructions for finding tardigrades by soaking moss in water and squeezing out the resulting juice onto slides.

Producer: Jeff Emtman
Music: The Black Spot

 

Jeff’s Microscopy Pics

Student microscope and smartphone.

 
 

Sponsor: Pod People

Pod People is an audio production and staffing agency with a community of 1,000+ producers, editors, engineers, sound designers and more.  Pod People is free to join. After a short onboarding process, Pod People will send you clients and work opportunities that are a good match for your specific skills and interests.

Theodora is @hypo_inspo

Image by Jeff Emtman

 

A brief follow-up to last episode: you can now follow our AI-powered friend Theodora on Twitter! She tweets several times a day, giving bad advice, good advice, and some strange poetry. Her account’s called Hypothetical Inspiration. Give her a follow.

 

HBM146: Theodora

Computer-generated text projected onto computer-generated waves. Image by Jeff Emtman.

 

How does a computer learn to speak with emotion and conviction? 

Language is hard to express as a set of firm rules.  Every language rule seems to have exceptions, the exceptions have exceptions, and so on.  Typical “if this, then that” approaches to language just don’t work.  There’s too much nuance. 

But each generation of algorithms gets closer and closer. Markov chains were invented in the early 1900s and rely on nothing more than basic probabilities.  The idea is simple: look at an input (like a book) and learn the order in which words tend to appear.  With this knowledge, it’s possible to generate new text in the style of the input, just by looking up which words are likely to follow each other.  It’s simple and sometimes half decent, but it’s not effective for longer outputs, as the approach tends to lack object permanence and generate run-on sentences. Markov models are used today in predictive-text phone keyboards, and can also be used to predict weather, stock prices, and more. 
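The core of the technique fits in a few lines of Python. The sketch below is a toy word-level Markov chain, not anything used in the episode; “corpus.txt” is a placeholder for whatever input text you want to imitate.

```python
# Toy word-level Markov chain: learn which words follow which,
# then sample new text from those probabilities.
import random
from collections import defaultdict

with open("corpus.txt") as f:          # hypothetical input text
    words = f.read().split()

# Record every word that follows each word in the input.
transitions = defaultdict(list)
for current, following in zip(words, words[1:]):
    transitions[current].append(following)

# Walk the chain: repeatedly pick a plausible next word.
word = random.choice(words)
output = [word]
for _ in range(50):
    followers = transitions.get(word)
    word = random.choice(followers) if followers else random.choice(words)
    output.append(word)

print(" ".join(output))
```

Because duplicates are kept in each list, common word pairs get sampled more often, which is all the “probability” this kind of model needs.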

There’ve been plenty of other approaches to language generation (and plenty of mishaps as well).  A notable example is Cleverbot, which chats with humans and heavily references its previous conversations to generate its responses.  Cleverbot’s chatting can sometimes be eerily human, perfectly regurgitating slang, internet abbreviations, and obscure jokes.  But at the end of the day it’s something of a sly trick, and, as with Markov chains, Cleverbot still doesn’t always grasp grammar or object permanence. 

In the last decade or two, there’s been an explosion in the abilities of a different kind of AI: the artificial neural network.  These “neural nets” are modelled on the way that brains work, running stimuli through their “neurons” and reinforcing the paths that yield the best results. 

The outputs are chaotic until the network is properly “trained.” But as training approaches its optimal point, a model emerges that can efficiently process incoming data and produce output with the same kinds of nuance, strangeness, and imperfection that you expect to see in the natural world.  Like Markov chains, neural nets have plenty of applications outside language too. 

But these neural networks are complicated, like a brain.  So complicated, in fact, that few try to dissect the trained models to see how they’re actually working.  Tracing them backwards is difficult, but not impossible.

If we temporarily ignore the real risk that sophisticated AI language models pose for societies trying to separate truth from fiction, these neural net models allow for some interesting possibilities: namely, extracting the language style of a large body of text and using that extracted style to generate new text written in the voice of the original. 

In this episode, Jeff creates an AI and names it “Theodora.”  She’s trained to speak like a presenter giving a Ted Talk.  The results range from believable to utterly absurd, and cause Jeff to reflect on the continued inability of individuals, AIs, and large nonprofits to distinguish between good ideas and absolute madness.

 

Three bits of raw output from Theodora. These text files were sent to Google Cloud’s TTS service for voicing.

 

On the creation of Theodora:  Jeff used a variety of free tools to create Theodora for the episode.  OpenAI’s Generative Pre-trained Transformer 2 (GPT-2) was turned into the Python library GPT-2 Simple by Max Woolf, who also created a tutorial demonstrating how to fine-tune the model for free using Google Colab.  Jeff used this tutorial to train Theodora on a corpus of about 900 Ted Talk transcripts for 5,000 training steps. Jeff then downloaded the model locally and used JupyterLab (Python) to generate new text.  That text was then sent to Google Cloud’s Text-To-Speech (TTS) service, where it was converted to the voice heard on the episode. 
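For anyone curious, the pipeline roughly sketches out like this in Python. This is an illustration of the workflow described above, not Jeff’s actual notebook; the corpus filename, generation settings, and TTS voice are all assumptions.

```python
# Rough sketch of the Theodora pipeline: fine-tune GPT-2 with Max Woolf's
# gpt-2-simple library, generate text, then voice it with Google Cloud TTS.
# Filenames, generation settings, and the voice are illustrative guesses.
import gpt_2_simple as gpt2
from google.cloud import texttospeech

# Fine-tune the small GPT-2 model on a corpus of Ted Talk transcripts.
gpt2.download_gpt2(model_name="124M")
sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              dataset="ted_transcripts.txt",   # hypothetical corpus file
              model_name="124M",
              steps=5000)                      # 5,000 steps, per the notes above

# Generate new text in the style of the training corpus.
script = gpt2.generate(sess, length=400, temperature=0.8,
                       return_as_list=True)[0]

# Send the generated text to Google Cloud's TTS service for voicing.
client = texttospeech.TextToSpeechClient()
response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text=script),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US", name="en-US-Wavenet-F"),  # any voice would work
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3))

with open("theodora.mp3", "wb") as f:
    f.write(response.audio_content)
```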

Producer: Jeff Emtman
Music: Liance

 
 

James Li, a.k.a. “Liance.” Photo by Alex Kozobolis.


Sponsor: Liance

Independent musician James Li has just released This Painting Doesn’t Dry, an album about the relationship between personal experiences and the story of humanity as a whole.

James made this album while he anxiously watched his homeland of Hong Kong fall into political crisis.

HBM145: The Juice Library


Amanda Petrus dressed as Brunhilde among a field of fruit punch. Image by Jeff Emtman.

 

Like so many others, Amanda Petrus got a bit lost after college. She had a chemistry degree and not a lot of direction.  But she was able to find work at a juice factory in the vineyards of western New York.  Her job was quality control, which meant overnight shifts at the factory, tasting endless cups of fruit punch and comparing them to the ever-evolving set of juice standards that they kept in the “juice library.” 

She calls herself an “odd creature,” especially for the time and place: she was a woman working in a factory dominated by men, she was openly lesbian (yet still had to rebuff advances from her coworkers), and she was a lover of Richard Wagner, sometimes dressing up as a Valkyrie.

Unfortunately, much of her time at the factory was characterized by the antics of her juice-tasting colleague, Tim, who, in some ways, mirrored the traits of her favorite composer.  He was incredibly gifted at understanding the flavor profile of fruit punch, able to predict the exact ratios of passion fruit, high fructose corn syrup, and Red 40 needed to please the factory’s clients.  But he also shared Wagner’s xenophobia and misogyny, along with his own brand of paranoia.  Often, Amanda was a target of his outbursts.

This came to a head when Amanda was suddenly fired and escorted from the factory after Tim levelled an incredible accusation of conspiracy against her. 

After this incident, Amanda got into grad school, and started her path towards teaching.  She is now a professor of chemistry at the Community College of Rhode Island.  She also runs the website Mail From A Cat where you can order mail...from a cat. 

Producer: Jeff Emtman
Music: The Black Spot, Serocell, Ride of the Valkyries (performed by The United States Marine Band), Overture from The Flying Dutchman (performed by University of Chicago Symphony Orchestra), Prelude from Parsifal (recording from the European Archive). 

 
Image courtesy Amanda Petrus.


 

Esoteric Bumper Stickers sells waterproof vinyl stickers that can fit any feeling. Not just for cars, Esoteric Bumper Stickers can show the world your knowledge of the briny deep, your passion for flora, your love of claws in the dark, etc.

Just added $1000 to the resale value of this car.

HBM144: Keeping A Place

Image by Jeff Emtman. Blended photographs taken from the Ballard Bridge and the Wayne Tunnel in Bothell, Washington (featuring murals by Kristen Ramirez).

 

HBM Host Jeff Emtman has always been afraid of losing his memories. Places he cares about keep getting torn down.

Forrest Perrine prepares a balloon at the Green Lake Aqua Theater.

In this episode, Jeff bikes around Seattle, recording popping balloons to capture the sound of places he likes: Padelford Hall’s Parking Garage, the Wayne Tunnel in Bothell, his old house in Roosevelt, the Green Lake Aqua Theater, and his front porch on a snowy day.  

The sound of a popping balloon can be used to re-create a space digitally.  These popping sounds are loud ‘impulses’, and the space ‘responds’ accordingly.  These impulse responses can then be fed to an audio effect called a “convolution reverb” which interprets the impulse response and applies it to any incoming sound.  
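A minimal version of that effect is easy to sketch with Python and SciPy. The filenames below are placeholders, and this assumes both recordings share a sample rate; it’s an illustration of the technique, not the tool used in the episode.

```python
# Minimal convolution reverb: convolve a "dry" recording with a recorded
# impulse response (here, a balloon pop) to place it inside that space.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate_ir, ir = wavfile.read("balloon_pop.wav")   # impulse response (placeholder name)
rate_dry, dry = wavfile.read("dry_voice.wav")   # sound to place in the space
assert rate_ir == rate_dry, "resample so both files share a sample rate"

# Mix down to mono floats.
ir = ir.astype(np.float64)
dry = dry.astype(np.float64)
if ir.ndim > 1:
    ir = ir.mean(axis=1)
if dry.ndim > 1:
    dry = dry.mean(axis=1)

# The convolution itself: every sample of the dry signal triggers a scaled
# copy of the room's response, and the copies all sum together.
wet = fftconvolve(dry, ir)
wet /= np.max(np.abs(wet))                      # normalize to avoid clipping

wavfile.write("wet_voice.wav", rate_dry, (wet * 32767).astype(np.int16))
```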

Rick and Kathy Emtman are heard on this episode.  Forrest Perrine helped with some of the recordings.  

Producer: Jeff Emtman
Music: The Black Spot, August Friis, Serocell, Phantom Fauna

 
 

Walk in the Woods is a free mini zine that you can get in the mail!

Creator Flissy Saucier writes and draws about her experiences walking in the woods in this monthly+ publication.

You can donate to keep the project going and get additional benefits.

HBM143: Laughing Rats and Dawn Rituals

Image by Jeff Emtman. Photo of sage grouse by Bob Wick of the Bureau of Land Management. Orange sky elements are the spectrogram of the sound of the mouse courtship call heard in this episode.


 

Animals sometimes make noises that would be impossible to place without context.  In this episode, three types of animal vocalizations—described by the people who recorded them. 

The monkey who lost their mother. Photo by Stephanie Foden.

Ashley Ahearn: Journalist and producer of Grouse, from Birdnote and Boise State Public Radio

Joel Balsam: Journalist and producer of the upcoming podcast Parallel Lives.  Joel co-created a photo essay for ESPN about the “pororoca”, an Amazonian wave chased each year by surfers. 

Kevin Coffey, Ph.D.: Co-creator of DeepSqueak and researcher at VA Puget Sound and the University of Washington.  Kevin co-authored the paper DeepSqueak: a deep learning-based system for detection and analysis of ultrasonic vocalizations in Nature’s Neuropsychopharmacology journal. 

Also heard: calls of the Indies Short Tailed Cricket (Anurogryllus celerinictus), which may be the perpetrator of the so-called “sonic attacks” recently reported in Cuba.  Sound sent in by HBM listener Isaul in Puerto Rico.  

Producer: Jeff Emtman
Music: The Black Spot

 

Sponsor: Chas Co

Chas Co takes care of cats and dogs in Brooklyn (especially in Prospect Lefferts Gardens, Bed Stuy and surrounding neighborhoods). 

Chas Co welcomes pets with special behavioral and medical needs, including those that other services have turned away.  They offer dog walking, cat visiting, and custom care arrangements too. 

Look, it’s Kane!