CMSC 150 - Spring 2017 - Blog For Code, By Code
Friday, March 24, 2017
Entry 8 - We Beat the Turing Test... kinda
Remember that movie The Imitation Game? It featured Benedict Cumberbatch as Alan Turing, the English computer scientist known as the father of theoretical computer science and artificial intelligence(1).
The film followed a large part of Alan Turing's life and legacy, starting with his recruitment into MI6 as a mathematics alumnus of Cambridge. Turing then, as is now famously known, became the world's leading cryptanalyst when he and his team created a device that broke the Nazi Enigma, a code believed to be unbreakable. But why was the movie called The Imitation Game? Perhaps because Alan Turing's most notable contribution was one not featured in the film.
After World War II, when Turing's achievements became known worldwide, his status and fame skyrocketed and landed him top positions in research institutes. In 1950, while leading the computing laboratory at the University of Manchester, Turing published the paper "Computing Machinery and Intelligence"(2).
This paper provided the basis for Artificial Intelligence by proposing a test called "The Imitation Game".
The Turing Test was an experiment Alan Turing proposed in which a computer would have to trick at least half of its human judges into thinking they were digitally chatting with a person rather than a computer.
A competition to see whether anyone can pass the Turing Test with their software, called the Loebner Prize, is held every year(4). Many different "chatterbots" have come close to taking home the prize, but none has come closer than Eugene, a Russian program that fooled 33% of judges into believing it was a 13-year-old child. It has been hailed as one of the most advanced conversational programs in the world, and it is still being improved(3).
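To make the pass-or-fail idea concrete, here is a rough Python sketch of my own (a toy example, not how the real contests are scored) that tallies how many judges a program fooled and checks that share against a chosen threshold, whether Turing's 50% bar or the roughly 30% bar Eugene was judged under:

def pass_rate(verdicts):
    # verdicts: list of booleans, True if a judge believed they were talking to a human.
    return sum(verdicts) / len(verdicts)

def passes_turing_test(verdicts, threshold=0.5):
    return pass_rate(verdicts) >= threshold

# Example: 10 of 30 judges were fooled, roughly Eugene's 33%.
verdicts = [True] * 10 + [False] * 20
print(f"Fooled {pass_rate(verdicts):.0%} of judges")
print("Passes at a 30% bar:", passes_turing_test(verdicts, threshold=0.30))
print("Passes at a 50% bar:", passes_turing_test(verdicts, threshold=0.50))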
Relation to Computing:
Alan Turing had high expectations for computers. He saw enough potential in them to believe they would one day become fluent in our language and conversation, and he saw all of this while working with only the most basic components.
I believe that his work on the machine that cracked the German Enigma is what gave him such faith in the future of the computer. Once it was programmed, the machine became the cryptanalyst all on its own: feed it coded messages and it would work out the answer for you.
Alan Turing's impact on Computer Science today is immense, and his work is still studied and praised by even the most accomplished programmers of our day.
References:
1. Beavers, Anthony (2013). "Alan Turing: Mathematical Mechanist". In Cooper, S. Barry; van Leeuwen, Jan. Alan Turing: His Work and Impact. Waltham: Elsevier. pp. 481–485. ISBN 978-0-12-386980-7.
2. Alan Turing. (2016, October 21). http://www.biography.com/people/alan-turing-9512017
3. Hern, A. (2014, June 09). What is the Turing test? And are we all doomed now? https://www.theguardian.com/technology/2014/jun/09/what-is-the-alan-turing-test
4. Artificial Intelligence | The Turing Test. (n.d.). http://www.psych.utoronto.ca/users/reingold/courses/ai/turing.html
Friday, February 24, 2017
Entry 7 - The Personal Assistant
Technologies, before they are developed, are often thought of as the future of the world. The dreams of one day having portable phones, cars that tell us where to go, or simply a source of light that runs on electricity rather than gas or candle wax have all been realized, and it's likely our technological dreams today will one day be achieved as well. No matter what technological advancement you may be dreaming of, it is most likely followed by the thought that the future will truly have arrived when something so incredible can exist.
…except speech recognition…
From the moment of its invention, speech recognition has been treated as the kind of project that could never be perfected, and therefore as something ridiculous. Recognizing human speech and comprehending a string of words is such a complex task that it takes years upon years to reach even a functioning level, and along the way any use of that tech produces poor and often laughable results. Yet these recognition devices have been installed in products since the very beginning. As a way to stay on the “cutting edge” of the tech world, companies put sub-par voice control into just about anything they could.
The frustration and laughter that came from consumers attempting to use these devices led to their being seen as nothing more than a gimmick.
However, speech recognition has come a long way, so long in fact that it is now considered one of the “technological backbones” of the “internet of things”(1), the idea that everyday objects will soon be connected to a greater “hub” for higher inter-connectivity. Programs like Siri on the iPhone claim to be so accurate that they act as a "personal assistant" to you and can perform hundreds of functions, like telling jokes, acting as a calculator, and even locating friends. Cars that can talk back to you, tell you the closest places to enjoy a meal, and set navigation for you have also become nearly commonplace.
Relation to Computing:
MIT has recently developed a new chip that reduces the power consumption of voice control by up to 99%. The continued advancement of speech recognition is led by another “backbone”: programming. With thousands upon thousands of words and millions of possible sentence combinations, programs must be written that let the computer decide for itself which words are most likely to follow others, using complex algorithms and even drawing on past user input(2). The computer science behind these devices must keep improving, working toward the eventual perfection of speech recognition so that consumers no longer see it as a joke. Institutions such as MIT and companies like Intel are leading this charge and inventing new technology to assist in the process, including a device that only begins the voice-recognition process when activated by a "wake word"(1).
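As a toy illustration of that “which word follows which” idea (my own simplification, nothing like the models the article describes), here is a short Python sketch of a bigram predictor that learns from past transcripts which word most often follows another:

# A bigram model: count, from past transcripts, which word follows which.
from collections import defaultdict, Counter

class BigramPredictor:
    def __init__(self):
        self.following = defaultdict(Counter)  # word -> counts of next words

    def train(self, transcript):
        words = transcript.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.following[current][nxt] += 1

    def predict_next(self, word):
        counts = self.following[word.lower()]
        return counts.most_common(1)[0][0] if counts else None

model = BigramPredictor()
model.train("set a timer for ten minutes")
model.train("set an alarm for seven")
model.train("set a reminder for tomorrow")
print(model.predict_next("set"))  # 'a', since it follows 'set' most often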
References:
1. Hardesty, Larry. "Voice Control Everywhere." MIT News. MIT News Office, 13 Feb. 2017. Web.
2. G, R. L. "How Speech-recognition Software Got so Good." The Economist. The Economist Newspaper, 22 Apr. 2014. Web. 24 Feb. 2017
Friday, February 3, 2017
Entry 3 - Technology on the Brain... and in it
"True integration with the human body". That's what Jonathan Leblanc, the head of developer evangelism at PayPal has started calling the next phase of technology. A company like PayPal is a perfect example of one that would be of prime hacking potential. Millions of consumers use the website every day to make purchases, and trust that their credit card information is safe, saved behind a password that most likely has been used on every other web account they hold. Because of this, the continual grow in trust of technology must be met with a continual grow in confidentiality measures taken.
Already, bodily passwords like fingerprint identification have hit the scene and have received high praise as they become more and more conventional. But would you believe that these methods are already being considered antiquated?
The move now is to allow internal bodily functions to unlock devices, creating "passwords" that are much more difficult to hack. LeBlanc mentions methods like heartbeat and vein recognition, ingestible devices, and even brain implants(2).
Another reason to move away from typed, single-word passwords is the rising cost of cryptography. Even as encryption advances, hackers learn at an accelerated rate, making it a constant struggle to stay ahead in password protection.
Zhanpeng Jin, an assistant professor in the Department of Electrical and Computer Engineering at Binghamton University's Thomas J. Watson School of Engineering and Applied Science, has been working with these methods for some time now. "Essentially, the patient's heartbeat is the password to access their electronic health records," he says(1), speaking about his patients and the intense data encryption their files are locked behind. A patient's electrocardiograph (ECG) is taken as part of their file to check on their condition, and at the same time the signal is used as an encryption key.
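As a very rough illustration of that idea (my own toy sketch, not Jin's actual method), here is a bit of Python that reduces an ECG reading to coarse features and hashes them into a key, so that two slightly noisy readings of the same heartbeat unlock the same record:

import hashlib

def ecg_features(samples, bucket=5):
    # Coarsely quantize the waveform so small measurement noise maps
    # to the same feature values (a stand-in for real feature extraction).
    return tuple(round(s / bucket) for s in samples)

def key_from_ecg(samples):
    # Hash the quantized features into a fixed-length key.
    digest = hashlib.sha256(repr(ecg_features(samples)).encode())
    return digest.hexdigest()

enrollment = [12, 48, 95, 47, 11]   # made-up ECG samples
later_scan = [11, 49, 94, 46, 12]   # same heartbeat, slight noise

print(key_from_ecg(enrollment) == key_from_ecg(later_scan))  # True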
Relation to Computing:
This form of password protection is still very much on the rise, and as the technology gets more and more complicated, so does the computing behind it. In examples such as the ECG unlocking method, a patient's ECG may shift due to factors such as sickness or age, and it is up to the software to work around this. Programming an ingestible device to draw power from stomach acid is another challenge that has yet to be tackled. There are many advancements to be made in this field, but none will be possible without the programs to back them up.
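One way software could cope with that drift (again, just a hypothetical sketch of mine, not a published scheme) is to compare each fresh reading against a stored template with some tolerance, and nudge the template toward readings that are accepted so gradual changes keep working:

def close_enough(template, reading, tolerance=3.0):
    # Accept if the average absolute difference stays within the tolerance.
    diffs = [abs(t - r) for t, r in zip(template, reading)]
    return sum(diffs) / len(diffs) <= tolerance

def authenticate(template, reading, tolerance=3.0, blend=0.1):
    # Returns (accepted, updated_template); the template drifts toward
    # accepted readings so slow changes from age or illness still match.
    if not close_enough(template, reading, tolerance):
        return False, template
    updated = [(1 - blend) * t + blend * r for t, r in zip(template, reading)]
    return True, updated

template = [12.0, 48.0, 95.0, 47.0, 11.0]
accepted, template = authenticate(template, [13.0, 50.0, 93.0, 46.0, 12.0])
print(accepted)  # True, and the template has shifted slightly toward the new reading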
References:
1. "Heartbeat Could Be Used as Password to Access Electronic Health Records." ScienceDaily. ScienceDaily, n.d. Web. 03 Feb. 2017.
2. Mizroch, Amir. "PayPal Wants You to Inject Your Username and Eat Your Password." The Wall Street Journal. Dow Jones & Company, 17 Apr. 2015. Web. 03 Feb. 2017.
Friday, January 20, 2017
Entry 1 - Technology on the Brain... and the Wrist, Feet, Ears, and Eyes
Wearable technology is the name given to a category of technology that has seen a massive increase in consumer sales over the last several years. Although the idea of gadgets worn on the body has been around since the beginning of the computing age, it was not until the end of the 20th century that technology could be made compact enough to be feasible for portability and bodily wear(1).
The first commercially successful technology fitted for the body was a device still in frequent use today: the hearing aid. Not long after, in the 1980s, the wildly popular calculator watch was introduced(1).
Watch technology went quiet for a long while after this, until a company called Pebble began crowdfunding in 2013 and sold over one million smartwatches by the end of 2014. Since then, the race for the most complex and best-selling smartwatch has been on, with the massive companies Motorola, Google, and Apple all releasing their own take on the idea. Fitbit, a dedicated fitness-tracking band, has also taken the wearable tech industry by storm, selling over 38 million devices since 2010(2).
Not every wearable tech device is a major success, however. Google's “Glass” provided an augmented view of reality to buyers, complete with a camera and voice control. With its high price and a “nerdy” look the public never accepted, the Glass project was shut down before a full consumer release(4)(1).
Relation to Computing:
When smartphones first came out, one of the largest technological races began: to create the most advanced phone features while keeping the device as small as possible. The competition that erupted from attempts to win over consumers led to some of the largest leaps in technological programming and, with them, advanced computing. With this new surge of popularity in wearable technology, a repeat of this race has already begun. In the next couple of years, it is forecast that wearable tech will take up over 50% of all technological sales(1)(3). Programming and computing are the skeleton of these devices, and code is the backbone. The programming of heart rate monitors, motion detectors, and micro-cameras is essential to making these pieces of tech tick.
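To give one small example of the kind of programming involved (a toy sketch of my own, nothing like real fitness-tracker firmware), here is a Python step counter that watches accelerometer magnitudes for threshold crossings:

def count_steps(magnitudes, threshold=1.2):
    # Count upward crossings of the threshold, treating each one as a step.
    steps = 0
    above = False
    for m in magnitudes:
        if m > threshold and not above:
            steps += 1
            above = True
        elif m <= threshold:
            above = False
    return steps

# Simulated accelerometer readings in g: three spikes above the threshold.
readings = [1.0, 1.3, 1.0, 0.9, 1.4, 1.1, 1.0, 1.5, 1.0]
print(count_steps(readings))  # 3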
References:
1. "Wearable Technology." Wikipedia. Wikimedia Foundation, n.d. Web. 20 Jan. 2017.
2. "Fitbit." Www.statista.com. N.p., n.d. Web. 20 Jan. 2017.
3. Lamkin, Paul. "Wearable Tech Market To Be Worth $34 Billion By 2020." Forbes. Forbes Magazine, 17 Feb. 2016. Web. 20 Jan. 2017.
4. Bilton, Nick. "Why Google Glass Broke." The New York Times. N.p., 4 Feb. 2015. Web. 20 Jan. 2017.