Furby, what's the weather?

This week is rather busy for me, so I decided to do something more lighthearted to escape, for a minute, the ever-present dread of finals week. Here are some fun ideas I found for personal projects related to Conversational User Interfaces. Though some substitute conversation for other fun interactions, the idea behind them remains the same. Without further ado, enjoy!

Legend of Alexa:

This is a fun video by Allen Pan on his channel, Sufficiently Advanced. What's cool about this project is that a) he made a home automation system himself, and b) he made it tuned to a musical instrument instead of voice commands. In simpler terms, he made an Alexa that responds to songs from the Legend of Zelda game series. The system detects the notes being played and then determines whether the song matches any of its instructions. This project made me think about what other ways we could interact with things such as voice user interfaces. Hobbyists can already create their own VUIs, and even small applets for some of the major commercial VUIs. What other ways to interact via sound can you think of?
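Just to make the idea concrete, here is a rough sketch of how note-sequence matching could work. This is not Allen Pan's actual code; the pitch detection is hand-waved, and the note patterns and action names are made up for illustration.

```python
# A rough sketch of the idea (not the real project's code): take a sequence of
# detected notes and check whether it contains a known "song command".
# Assume some pitch tracker has already turned microphone audio into note names.

SONG_COMMANDS = {
    # Hypothetical note patterns mapped to hypothetical home-automation actions.
    ("E", "D", "C", "E", "D", "C"): "unlock_door",
    ("A", "F", "A", "F", "A", "F"): "lights_on",
}

def match_song(detected_notes):
    """Slide over the detected notes and return the first matching command."""
    for pattern, action in SONG_COMMANDS.items():
        n = len(pattern)
        for i in range(len(detected_notes) - n + 1):
            if tuple(detected_notes[i:i + n]) == pattern:
                return action
    return None

# Example: pretend output of a pitch tracker listening to an ocarina.
notes = ["G", "E", "D", "C", "E", "D", "C", "B"]
print(match_song(notes))  # -> "unlock_door"
```

The real trick, of course, is the pitch detection and being forgiving about timing, but the matching step really can be this simple.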

Voice and Body:

So we already have chatbots and voice user interfaces. Companies have already used them to bring characters to life to interact with the general public. Examples include Disney's promotional chatbot featuring Judy Hopps of Zootopia, and an Albert Einstein-themed chatbot from National Geographic. Take that idea and apply it to a live animated character, and I think we're going somewhere. Services such as Adobe Character Animator allow for live mouth animation of 2D characters. The system is based on detecting sounds, so in theory a voice user interface would be enough to trigger the animation. If people were engaged by a text conversation with Albert Einstein, boy would kids go crazy over getting to talk to an actual, albeit probably cartoonish, Einstein. Now, it would need to be clear that it's a bot with limited functionality, but it could be a great way to spice up the science wing at a big city museum. Would you want to talk to a cartoon historical figure?

The Furlexa:

Now the title of my post makes a bit more sense, huh? Well, congratulations, you are cursed and can never take this forbidden knowledge back. Apparently there is a large community of people who hack Furbies, and one committed developer by the name of Zack, from Howchoo, turned one into an Alexa. Basically, he replaced the voice of the Furby with an Alexa and made the Furby move to match Alexa's speech. While this is one of the stranger projects here, I think it is a nice segue into talking about the housing of a VUI. Currently, VUIs come in sleek, unobtrusive housings. But would people be more invested in them if we brought in some concepts from Social Robotics? What if we gave Alexa or Siri a face that you could interact with on a more human level? I love the idea of fun little characters helping out your kids with their math homework after school to get them invested in learning. What thoughts do you have about how we should house these disembodied voices?

I hope these fun projects got you thinking about more than just the current standard design of Voice User Interfaces. Finally, if you didn't think I was crazy yet, here's ANOTHER Furby project!

Further (not crazy) Reading:

What are Voice User Interfaces?

Space Trash, and Other Fun Things

A few weeks ago, Russia shot down one of its old satellites from the 80s. By shot down, I mean just blew it up. The pieces of the satellite, plus pieces of the missile they used to do so, are still whizzing around up there. Astronauts on the ISS had to shelter in their escape craft while making sure that none of the debris from this recent 'decommissioning' hit the station. Those pieces are moving at around 17,000 miles per hour, and there are thousands of them. For reference, that's roughly ten times the speed of a bullet, and some of these pieces of space debris are much bigger than a bullet. If those pieces hit another satellite, the same thing could happen, just sans the missile. So more space debris could just mean more exploding satellites and even more debris. This is a good example of the problems we will be facing as more and more space debris is shot into, and shot while in, orbit. This cascading effect of space debris is commonly known as Kessler Syndrome.

On NASA’s website, they define Kessler Syndrome as

“Spent rockets, satellites and other space trash have accumulated in orbit increasing the likelihood of collision with other debris. Unfortunately, collisions create more debris creating a runaway chain reaction of collisions and more debris known as the Kessler Syndrome after the man who first proposed the issue, Donald Kessler. It is also known as collisional cascading”

-NASA.Gov

This cascade of space debris could cause many more satellites to be destroyed and, in the worst case, prohibit us from leaving the planet. Not only would this shut down GPS and weather forecasting, but also many other services we depend on in our daily lives. So instead of just being dour about the whole problem, why not see what's being done to alleviate the issue?

A visualization of satellites from the European Space Agency

Some of the current ideas for cleaning up space debris include craft with propulsion nets or robotic arms to catch satellites when they are decommissioned, so as to avoid them being hit and exploding into more debris. This would allow more effective control to bring them out of orbit and have them destroyed properly. The most concerning danger is the seemingly random trajectories of space debris, which can strike other equipment in orbit. Controlled crashes are a better way to get rid of space junk and collect it once it is back on Earth. The European Space Agency has some great videos about missions planned to retrieve space junk, possibly as early as five years from now, as well as how engineers can begin to plan for satellites to be sustainable in space. Sustainability in this context means asking: how will we get a satellite down when it's retired so it won't cause more problems later? You can find the ESA's posts and videos here.

If you would like to read more about the Russian Satellite news, here is a link to the article Vox did covering the event.

IoT, and the use of Planned Obsolescence

I think we can all agree that planning for your device to be quickly thrown away and replaced is a scummy tactic companies use to generate sales. It's understandable if a part isn't as sturdy and would require a huge workaround to avoid, but to fully plan a device or system that you know will be irrelevant by a date set by the company producing it is bad faith. Unfortunately, that is most likely going to be the case with some, if not most, smart devices. Companies love to slap 'smart' on something without thoroughly figuring out all the nuances of making it a reality. This week I read an article where the Tado thermostat the author installed freaked out when not connected to the internet and set the house to ninety-five degrees. The developers had assumed the device would always be connected, so there was a major bug in the system which, according to the author of the article, only stopped when they disconnected their boiler.

The Tado wireless thermostat, from their website

But how does a thermostat going out of control tie into planned obsolescence? Well, here is where I argue FOR it. What happens if Tado suddenly went under and couldn't host their servers anymore? That's a lot of potentially dangerous malfunctions that could cook people alive in their homes. Companies need to look at all aspects when designing and implementing IoT devices. We have become engrossed with live-service technology, and sometimes those services go offline. Major weather events are getting more common as the environment becomes a more pressing issue. Hurricanes and tropical storms may knock out internet and/or power, and instead of being the least of your problems, that outage could affect every smart device in your house. It feels like, in a bid for products to come out faster, design considerations get thrown out the window. Here is a great article I found outlining design considerations for IoT devices. Hopefully better-realized products can mitigate the low value and high price of IoT as it stands today.
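To show what I mean by designing for the offline case, here is a minimal sketch of a thermostat loop that falls back to a safe local setpoint when it can't reach its cloud service. This is purely an assumption about how a safer design could behave, not Tado's actual firmware; the endpoint URL and setpoint values are placeholders.

```python
# A minimal sketch of an offline-safe thermostat loop (assumed design, not a
# vendor's real firmware). If the cloud service is unreachable, hold a sane
# local default instead of letting the setpoint run away.
import time
import urllib.request

SAFE_SETPOINT_F = 68                              # conservative local default
CLOUD_ENDPOINT = "https://example.com/setpoint"   # placeholder URL

def fetch_cloud_setpoint():
    """Ask the vendor's server for the target temperature."""
    with urllib.request.urlopen(CLOUD_ENDPOINT, timeout=5) as resp:
        return float(resp.read())

def current_setpoint():
    """Use the cloud value when available, otherwise fall back locally."""
    try:
        return fetch_cloud_setpoint()
    except Exception:
        # Offline, server gone, or garbage response: keep the house livable.
        return SAFE_SETPOINT_F

while True:
    print(f"Target temperature: {current_setpoint()} F")
    time.sleep(60)  # re-check once a minute
```

The point isn't the specific numbers; it's that the device has a defined, safe behavior when the service it depends on disappears.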

Science Through Fantasy

Recently I went to one of the many Museums of Science. It was my local one, so not too much trouble getting there. My friend and I had a great time perusing the various exhibits they had. All the usual stuff: engineering, electricity, taxidermy, paleontology. Earlier I had convinced my friend to get tickets to an 'Omnimax' movie. I used to go to them all the time as a kid. There's this huge dome instead of a regular-sized movie screen, and it's tilted in front of you, unlike a planetarium. We watched a movie about an environmental scientist learning about climate change through rocks. Sounds like the next Marvel movie, right? Well, instead of a six-dollar nap, this movie was a bunch of fun. It all lies in how they presented the information.

Now there's a reason that the screen is a massive dome instead of a regular movie-sized sheet: immersion. The scientists learning about prehistoric climate change were diving into underwater caves to extract samples. This huge screen was, in a sense, an early version of a VR headset. You could see the film in almost any direction, and you were at the center of the action. YOU were diving three hundred feet below the Earth's surface, in a forest of crystallized stalactites. So that of course got me thinking: how could we immerse people more to help them learn and spark their curiosity?

Enter one of my other obscure interests, speculative evolution. In the genre of Speculative Evolution, artists imagine some weird, unearthly environment and try to work out how creatures would evolve in that habitat. Some people just do it for fun, but others do their research and explain the thought process behind how they came up with the grotesque abomination that is a frog with a six pack. The genre was started by Dougal Dixon's After Man: A Zoology of the Future. In it, Dixon examines what modern-day creatures would become if humans just up and disappeared and we let them do their thing for 50 million years. The rumor mill has it that at one point a museum had actually started working on an exhibit featuring creatures from Dixon's book. While that never came to be, imagine if it could.

Imagine a small child walking into a museum to see all the dinosaur bones and stuffed figures of modern animals. They stare at the insects, pinned neatly in their windows of biology. Then the child hears a muffled bellow. Peeking down the aisle, they see a beast never before seen, and it's moving. Animatronics are not a new concept in museums and themed attractions; entertainers have been using them for ages at this point. The problem with animatronics is that they're costly to build and maintain, and if you make a dinosaur, someone will likely discover your animatronic was wrong a week after it's revealed. Likewise, animated animals lack the interactability an animatronic could offer. Would a child be more likely to remember their biology lesson if they got to pet a bison? Engaging all the senses usually isn't a bad thing, but does it justify the cost?

Using social robots to help children learn about the natural world is a fun concept to think about, but the places that would host such creatures rarely have the funding to produce one. Each animatronic is likely one of a kind, so the expense is much greater. What are your thoughts on this? Do you think engaging kids will help them retain their lessons better, or is all of this talk about fun experiences at a museum just bunk? Personally, I love the idea of edutainment on paper, but it gets tricky when you try to implement it.

Picky Noses Can't Be Choosers, Or Can They?

Recently I've seen a boom in scent-based experiences. They have been relatively spread out across different products and systems, and even across time. Scent is not a new invention by any means, but I have been noticing its implementation more and more. I want to start with a quick example. You may or may not be familiar with the tricks Disney uses in their theme parks to help elevate the experience, but they also incorporate many smells. Most places that talk about the 'smellitizers,' such as this article from Insider, only mention the ambient smells. But Disney also purportedly uses smells to push sales of different items. The common folk tale usually involves the smell of fresh, hot popcorn enticing the average park-goer to help Disney fill its quota of popcorn sales. Both instances I've talked about use smell to enhance or direct an experience. I've even found that you can buy the ambient smells of Disney, in convenient candle form!

What does Animation smell like? Well, you can find out for only 18 USD. #NotSponsored

Candles are a big way of introducing smells into systems. While I just mentioned the Disney line, the same candle makers cover other properties, like Harry Potter's Butterbeer, and I've seen candles themed around even Dungeons and Dragons. Want your room to smell like you're fighting a dragon? You can buy that! Another great source of smell activation is diffusers. These cheap and lightweight smell projectors have made filling a room with scent very easy. One way I wasn't expecting them to be used was in a horror-themed, live-action roleplaying game. In the game, the players take on the role of paranormal investigators and have to uncover clues about murders on the farm where the game takes place. The game staff do lots of fun things throughout the weekend, such as replacing art and furniture, playing audio or using projections to both present clues and scare the players, and pumping smells into different areas to give players a certain understanding of their scenario. When players are investigating a specific character in the game, the administrators use diffusers to fill the area with a lavender scent. Not only does it make the room smell nice, it tells the players subconsciously that they are finding information relating to that character. If you are interested in finding out more about that game, here is the video from Mo Mo O'Brian, who documents immersive experiences all over the world.

So why the sudden interest in smells? Well, dear reader, recently I was listening to a talk about alarm systems and how they are developed. The thing that caught my eye, ironic given the context, was smell. One of the types of alarms was olfactory. While most would gravitate toward the smell of natural gas as a warning sign, I started thinking about how you could use smells in different contexts. Surprisingly, someone had already come up with an alarm clock designed to use smells as the alarm! It's called the Sensorwake Trio, and it is designed to activate all of your senses to help wake you up.

It reached funding on both Kickstarter and IndieGoGo

While I have never used this alarm clock, people seem to be excited about it. Google said it was one of the top 15 inventions that could change the world. While I don't think that panned out as Google predicted it would in 2017, it's still a neat device. Will it change the world? Well, personally, I think it would have done so by now. But it is still a use of smell as an active alarm. I think there are a lot of cool things we can do with smell, especially since it's now easily distributed thanks to things like candles and diffusers. What do you think: will smell revolutionize the standards of alarm technology in the near or distant future? Is there a cool scent-based technology you know about that I didn't include? Please let me know!

Additional Readings!

Science of Disney: Smellitizers

Digital Scent Technology: Wikipedia

Transhumanism: Triumph or Terror

Long-time followers of this blog may remember the first time I posted, like two months ago. In the opening line of that article I said I was terrified. Boy howdy, does reading about technological possibilities not help that anxiety. Caffeine has also probably not helped. Anyway, this week I decided to read a bit about Transhumanism. This is the idea of augmenting the human body through scientific achievement. Transhumanism could take the shape of implanted systems in the body or advanced pharmaceuticals. In the article I read by Alexander Thomas, he mentioned performance-enhancing drugs, specifically in the context of an educational setting. They could help maintain focus so you could study harder. Did one specifically come to mind? I sure came up with one that reportedly already does that. Would it not be in the student's best interest to take that drug to get ahead of their peers?

We live in a strange world where we are so competitive that it gets in the way of our human relations. How many humanitarian crises have you heard about recently? How many have you not heard about? To those reading this: do you feel this too? Maybe I'm just a Doomer who revels in the thought of despair, or maybe I just tend to run into articles with a Doomer disposition. The reason for this talk of doom and gloom, which I alluded to earlier, is the thought of that competitiveness increasing through Transhumanism. The initial releases of these products will undoubtedly be massively expensive, but if they increase your performance, shouldn't you buy in? Thomas argues that the buying power behind this new 'Transhuman Technology' will work to separate the haves and the have-nots even more. By outcompeting the poor, the rich would have all the wealth and influence. While an extreme thought experiment, we can see the wealth gap steadily increasing. This would surely leave a lot of people in a state of near worthlessness, and that worthlessness would increase the suffering of those who didn't have the means to augment themselves. Thomas pulls a quote from David Pearce which ends like this:

“…only hi-tech solutions can ever eradicate suffering from the world. Compassion alone is not enough.”

But where do technology and compassion intersect? This is based on the idea that we can use technology to be compassionate. To truly eradicate suffering we would theoretically need all humans to become Transhuman, but what about those who can't afford it? Would those who could afford it be generous enough to foot the bill? We already produce enough food and yet can't feed everyone properly. Compassion alone might not be the answer, but surely technology alone isn't enough either.

Now, I can't say it's time to break out the pitchforks, but I am concerned. If we continue to automate jobs faster than we can create new ones, a lot of people won't be able to buy this competitive edge that is Transhumanism. We already seem to enjoy dystopia enough, with The Hunger Games being a cultural touchstone and Squid Game recently rocketing right to the top of everyone's Netflix queue. While everyone starts getting in shape for whatever bloodsport becomes minimum wage, I think I will take up the fiddle.

HOW'S MY TYPING? I want to know: are my articles boring? Are they too depressing? Does it seem like I think I'm some big shot when my opinion has as little merit as some random person's on Reddit? I want to know what you think. Maybe next week I can try something positive.

Am I picking up the right signals?

I don't know if you have noticed as I have, but recently it feels more and more like my car knows more about music than I do. Now, granted, that's not a very high bar to vault, but it still hurts. Each time I unlock the car, pop the trunk, or leave a door open, there's a different little jingle that plays. It's been interesting going from the old days, where the only noise was the ker-thunk of my childhood self slamming the door of our minivan, to now, when there's a whole ORCHESTRA in my car. How do the manufacturers know that people won't misinterpret those fun noises? Well, it's all in SDT. Signal Detection Theory is how they test those notifications so the driver doesn't misinterpret them. A good composer can take the listener on a melodic journey through a symphony, and now SO CAN YOUR CAR. You can make the 'good' alerts more pleasing to hear. Leaving the door open? No worries, I'll just play some smooth jazz to let you know. However, if something gets dangerous, then the conductor can signal the cannons. It's important to be able to distinguish which noises coming from your vehicle are just there to let you know about something and which ones are warning you about imminent danger. Signal Detection Theory gives us the grounds to reliably test which sounds perform better for which tasks. I know I have been sitting in a Zoom call only to hear that little ding and wonder if someone joined or left the meeting. That quick decision could be life or death on the road. I guess Zoom didn't do their proper research! Anyway, I thought I'd talk with you a bit about a fun topic I found out about while looking into Signal Detection Theory. Let me know what you think could be a cool use of SDT!

Signal Detection Experiment

What I Did 

For this experiment I ran a series of tests to determine how effectively I could tell apart two types of stimuli made of stars and periods. There were two different example types: type A had around 46 stars on average, and type B had around 56 on average. I had to quickly assess which type was being presented to me without counting the ratio of stars to periods.


For the first round I did a series of three tests: one in which the two options would show up an equal amount, and two others where one option would be presented in a three-to-one ratio to the other. For the fourth and final test, I changed the visual distinctness between the two example types. I ran the tests on myself, because this is a learning process. Ideally I would have recruited someone to take them and analyzed their data, but I am still learning and wanted to focus on the analysis rather than recruitment. All of these tests were done in a typical working environment, in this case a quiet place with few distractions. This is done to simulate the participant's average working environment. As I am the participant, I used my actual working environment.
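For anyone curious how stimuli like these could be put together, here is a rough sketch of a trial generator with a configurable base rate. This is not the actual tool I used for the experiment; the grid size, star counts, and jitter are assumptions chosen to roughly match the descriptions above.

```python
# A rough sketch of generating star/period stimuli with a configurable base
# rate. Parameters here are assumptions, not the real experiment's settings.
import random

def make_stimulus(n_stars, grid_size=100):
    """Build a shuffled 10x10 grid of '*' and '.' containing n_stars stars."""
    cells = ['*'] * n_stars + ['.'] * (grid_size - n_stars)
    random.shuffle(cells)
    rows = [''.join(cells[i:i + 10]) for i in range(0, grid_size, 10)]
    return '\n'.join(rows)

def make_trials(n_trials, p_type_a, mean_a=46, mean_b=56, jitter=3):
    """Generate labeled trials where type A appears with probability p_type_a.
    p_type_a = 0.5 gives the 1:1 condition, 0.75 the 3:1 condition, etc."""
    trials = []
    for _ in range(n_trials):
        is_a = random.random() < p_type_a
        mean = mean_a if is_a else mean_b
        n_stars = mean + random.randint(-jitter, jitter)
        trials.append(('A' if is_a else 'B', make_stimulus(n_stars)))
    return trials

# Example: preview two trials from the 3:1 (mostly type A) condition.
for label, stim in make_trials(40, p_type_a=0.75)[:2]:
    print(label)
    print(stim)
    print()
```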


The objective of this experiment is to determine what would make the two types of examples visually distinct enough to be easily identifiable. The design of the experiment is largely the same as the Mueller and Weidemann Dot Classification test. I wanted to use this test to learn more about how distinct visual items need to be for graphical interfaces. In the field of User Experience, it is extremely important that the user be able to tell what an interface does and that they can distinguish the functions of different parts of the interface.


What I Found

For test 3:1, there were three times as many type A examples. For test 1:3, there were three times as many type B examples.


Here are the data for the first three trials I ran for this experiment. We can learn valuable information from dissecting them. In these data I compare how many times I was presented with A or B and, for each, how many times I responded A or B. These confusion matrices show that overall accuracy was similar for all three trials. An interesting thing to note is that, during the 3:1 and 1:3 trials, I was more likely to choose the type of example which showed up more often, even when that was wrong. We can infer from this that changing the base rate of the experiment biases the participant, once they pick up on the imbalance, toward selecting the more frequent type. The sensitivity remained largely similar across the initial three trials.
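If you want to put numbers on "sensitivity" and "bias" yourself, here is a minimal sketch of how d' and the response criterion could be computed from a confusion matrix like the ones above. The counts in the example are made up for illustration, not my actual experimental data, and the log-linear correction is just one common choice.

```python
# A minimal sketch: compute sensitivity (d') and response bias (criterion c)
# from a 2x2 confusion matrix, treating "A" as signal and "B" as noise.
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Return (d', c) for a two-alternative detection-style task."""
    # Log-linear correction avoids infinite z-scores when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts: shown A and said A (hits), shown A and said B (misses),
# shown B and said A (false alarms), shown B and said B (correct rejections).
d, c = dprime_and_criterion(hits=38, misses=12, false_alarms=15, correct_rejections=35)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
```

In this framing, the base-rate manipulation should mostly move the criterion c, while making the two types look more different should mostly raise d'.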

For the fourth experiment, I returned to the 50:50 base rate for showing both type A and type B. However, I made the two types look less similar. Type A in the first three experiments had on average 46 stars, and type B had 56. I pushed these further apart, to 40 stars for type A and 60 stars for type B. With that change, it was much easier to distinguish between the two types presented. The sensitivity of this experiment was almost triple that of the previous three. The experiment could be tuned even more by reining in one of the two variables I adjusted, to see how close the types could be while still being visually distinct.

What's In Store?

I'm sure we have all been to a website which had a fake advertisement masquerading as the button you actually needed to click. Or have you ever wondered whether that noise was Zoom telling you someone joined or left the call? These are all signals in use. I find the area of automotive acoustics of particular interest. Recently I have noticed a large amount of audio feedback coming from cars: when a door is open, when the trunk unlocks, or when a car in front of you is suddenly stopping, for example. This tiny orchestra of experience can easily get jumbled and give the user a false idea of what is happening. Currently, composers can elicit different emotions with different short sounds from the vehicle to indicate whether a given alert is good or bad. I'm sure no small amount of signal detection theory was used to determine whether drivers understood (a) if those were good noises or bad noises, and (b) what those noises meant for the driver.

Automation

Hello everyone, I am terrified. It is the month of October, and so far the scariest thing I have dealt with is the looming spectre of mass automation in the workforce. To the uninitiated it can seem like a daunting challenge. I don't want to go and say I know a lot about the state of automation, because I only recently started reading about it seriously. And sure, there are some good things that could come from automation taking up the boring, monotonous, banal jobs. Unfortunately, automation is a much bigger subject than just one person with a blog can hope to tackle, much to the chagrin of everyone on Medium.

We can pretty accurately predict which jobs are going to become automated in the near future, which could give some people time to adjust before it happens. Here is an article by Upstack which covers which areas are likely to be automated and why. These are all relatively simple, repetitive jobs which can be coded. The jobs that are less likely to be automated are the more complex, nuanced, intuitive jobs. Those are also a lot harder to train for than the jobs being replaced. These more difficult jobs can, in theory, be more fulfilling, letting people flex their creative problem-solving skills and do complicated work. In one of the brighter articles I read, Tannya D. Jajal of Awecademy argues that this new era of automation will lead to a rise in meaningful work. Jajal argues, based on Ray Kurzweil's book "The Singularity is Near," that people will no longer be reliant on menial labor to get through life. They can change careers into something more personally fulfilling, and everything will be better. What isn't given in the article is any data about how that will happen.

In the bleakest article, by contrast, author Scott Santens suggests that we will hit a point where there are far more people than jobs available due to automation. Santens backs up his ideas by showing trends in industries such as oil, where a major downturn forced oil companies to upgrade their rigs. This left a huge gap in their workforce that they didn't need to hire back, even after the industry had recovered. He cites the figure that 220,000 jobs may have been lost forever. Will complex, meaningful work be able to replace that many jobs? I don't know. I DO know that Santens shows his hand towards the end by putting out the idea of Universal Basic Income. Personally, I don't want to get into that can of worms, but it could easily be why he makes such a Nostradamus-like prediction about automation. He uses the prediction as a platform to push the idea of UBI, since so many people would be out of work. That could point to him overlooking, or not wanting to include, job growth in order to make his point.

All in all, automation can be a scary idea. Permanent job loss can become a major problem if we let it. However, Santens does make the point that, as of 2017, the conversation regarding automation had just started gaining traction. Now we are more conscious of it; not completely, but awareness is growing. I want to leave you with a bright idea before I go. The last article I read this week spoke about how automation, in conjunction with people, can make education and learning more efficient. In the article, Rebecca Sealfon talks about all of the simple, menial tasks teachers have to do to run a classroom. You might be familiar with Scantron sheets that help grade tests faster, but there's more that can be done. Sealfon offers suggestions which could allow the teacher to spend more time with the students, nurturing their sense of learning.

Well, what do you think: are we doomed? Am I overreacting (someone on the internet overreacting, whaaat)? Is there a reason I shouldn't be scared? Let me know, I'm excited to hear about it.

Hello World

Please come in, feel free to take off your jacket and stay awhile. I'm Steve, by the way. I don't have a great story to tell, but I plan to one day. For now I just want to take my time and learn so that I can compose something worth telling. Graduate school was always an exciting thought for me, but I figured I would take some time in industry before I went back. After things shut down, I figured it would be nice to have some stability back in my life. But I'm not just here for some stability; I'm here to learn and grow.

There's an old adage about knowing what you don't know: that gap between hearing a name and the deep well of information it entails. I know I don't know a lot. That's why I like research, because I get to find out. There are so many topics I would be interested in learning about. Over the pandemic I was livestreaming every Friday with my friends, doing improv. Unfortunately, that didn't work so well over Zoom. So we had to, wait for it, IMPROVISE. We learned how to make our show look good by adding in scenes and imagery and animations, which weren't the primary use of the streaming software we used. That improved the look of our stream greatly, and it made things more fun. So how could we improve engagement in the stream even more? How do other streamers improve their engagement? What crazy tech have other streamers come up with? A few names such as SushiDragon and DandiDoesIt stick out in my mind for having wild tech in their streams. They have automated animations which play when viewers in the chat take specific actions. Those two specifically are more dance-related, but what could you do with an educational stream? These are all questions I want to explore as live streaming grows as an art form. Personal interests aside, I am interested in research and design of spaces and educational technology.

Digging a bit further into the two streaming examples I mentioned earlier, there are some interesting things they do. For one, SushiDragon has some crazy setups. He has motion-tracking cameras to make sure none of the action gets missed, but he also allows viewers to interact with the stream by triggering animations. The viewer can either donate money or spend 'Channel Points' (Twitch's free currency, which you accrue by watching streams) to activate these special animations, which in effect put them in contact with the streamer. In this instance the streamer, SushiDragon, has automated interaction with his viewers, which allows him to focus on what he is doing while also letting the viewer feel like they are affecting the stream in a non-harmful way. It is an interesting interaction which essentially mimics human connection via an automated process. While that may be more 'fun' for the stream, there has been an ongoing issue of parasocial relationships on Twitch in particular. Does this automation make that situation worse? That could definitely be argued. Automation can be a dangerous tool which takes advantage of people if we are not careful. I highly doubt the intent of this fun interaction was to create addiction to the stream, but it could be compounding the issue. Who knows, it could be entirely harmless. Anyway, I hope you can forgive my stream of consciousness, and thank you for reading.
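To give a rough sense of what that automation might look like under the hood, here is a minimal sketch of a redemption-to-animation dispatcher. This is an assumption about the general shape of such a setup, not SushiDragon's actual system; the reward names and animation hooks are made up, and in a real stream the events would come from Twitch's channel point redemption feed rather than being faked in a loop.

```python
# A minimal sketch of mapping channel-point redemptions to on-stream animations.
# Event source is faked; a real setup would receive these from Twitch via a bot
# or overlay tool, not from a hard-coded list.
import time

# Hypothetical reward names mapped to animation triggers.
ANIMATIONS = {
    "confetti": lambda user: print(f"Playing confetti animation for {user}"),
    "dance_cam": lambda user: print(f"Cutting to the dance cam for {user}"),
}

def handle_redemption(event):
    """Fire the matching animation for a channel-point redemption event."""
    reward = event.get("reward_title", "").lower()
    user = event.get("user_name", "someone")
    action = ANIMATIONS.get(reward)
    if action:
        action(user)
    else:
        print(f"No animation mapped for reward '{reward}'")

# Fake a couple of incoming events to show the flow.
for fake_event in [
    {"reward_title": "Confetti", "user_name": "viewer123"},
    {"reward_title": "Dance_Cam", "user_name": "viewer456"},
]:
    handle_redemption(fake_event)
    time.sleep(0.5)  # simple pacing so animations don't pile up
```

The appeal, from a design standpoint, is exactly what I described above: the streamer sets the mapping once, and the interaction runs itself from then on.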