
Friday, 7 March 2014

Google's Expanding Its AI Mind

Google is expanding its ambitions in Artificial Intelligence with the acquisition of AI company DeepMind for a reported $400 million.

It’s no secret that Google has an interest in AI; after all, technologies derived from AI research help fuel Google’s core search and advertising businesses. AI also plays a key role in Google’s mobile services, its autonomous cars, and its growing stable of robotics technologies.

With the addition of futurist Ray Kurzweil to its ranks in 2012, Google also has the grandfather of “strong AI” on board, a man who forecasts that intelligent machines may exist by midcentury.

If all this sounds troubling, don’t worry: Google’s acquisition of DeepMind isn’t about fusing a mechanical brain with faster-than-human robots and giving birth to the misanthropic Skynet computer network from the Terminator franchise.

DeepMind's Web site describes the London-based company as "cutting edge" and specializing in combining "the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms." The site says the company's initial commercial applications are simulations, e-commerce, and games.

Google has been particularly focused on advances in artificial intelligence recently. Scientists working at the company's secretive X lab created a neural network for machine learning by connecting 16,000 computer processors and then unleashed it on the Internet. The network's performance exceeded researchers' expectations, doubling its accuracy rate in identifying objects from a list of 20,000 items.

I wrote about Artificial Intelligence as a part of my article series The Conscious Internet. Do you think the age of Artificial Intelligence is upon us? Comment below...

Thursday, 16 January 2014

RoboEarth is up to Speed


In the midst of my series on the possible future of the Internet, an article from the BBC falls into my lap that really brings reality up to speed. As it turns out, parts of my article about the future of robots are already very much the present.

Scientists at Eindhoven University of Technology in the Netherlands have created the first ever World Wide Web for robots. Sadly not dubbed "SkyNet", these doom-bringers decided to call the system "RoboEarth". Playing straight into the plot of the Terminator series, they did however decide to set the robots loose by having them run a hospital room. No better way to take over the earth than to start by killing off the weak and decrepit.

I am exaggerating, naturally, but I do find the timing comical. The four robots currently working in the mocked-up (I might add) hospital room are designed to serve drinks, dish out pills, and alert staff to emergencies. At the core of it all is one central system controlling all the robots.

As I discussed previously, a central problem with artificial intelligence is the inability to come up with new solutions to new problems. A computer is bound to the strict confines of its programming, resulting in an inability to learn "new" things. This specific problem is something they are now trying to circumvent by having the central computer do the learning. The aim of the system is to create a kind of ever-changing common brain for robots.

"The problem right now is that robots are often developed specifically for one task," said Rene van de Molengraft, the RoboEarth project leader. "Everyday changes that happen all the time in our environment make all the programmed actions unusable. A task like opening a box of pills can be shared on RoboEarth, so other robots can also do it without having to be programmed for that specific type of box."

The system is cloud-based, which means that a lot of the memory capacity can be off-loaded from the individual robots to the central core, allowing for much smaller on-board storage and faster processing. A single robot would simply download the script for solving the problem at hand, and then delete said script when the task is completed.
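To make that idea concrete, here is a minimal Python sketch of the download-execute-discard pattern described above. The "cloud" store, task names, and steps are all hypothetical stand-ins of my own, not RoboEarth's actual interface.

```python
# Hypothetical sketch of the download-execute-discard idea behind a shared
# "robot brain". The cloud here is just a dictionary; tasks and steps are invented.

CLOUD_RECIPES = {
    "open_pill_box": ["locate box", "grip lid", "twist lid", "lift lid"],
    "serve_drink": ["locate cup", "grip cup", "move to patient", "release cup"],
}

def fetch_recipe(task):
    """Download the step list for one task from the shared store."""
    return list(CLOUD_RECIPES[task])

def perform(task):
    recipe = fetch_recipe(task)           # download only what is needed right now
    for step in recipe:
        print(f"robot executes: {step}")  # a real robot would drive its actuators here
    del recipe                            # discard the script once the task is done

if __name__ == "__main__":
    perform("open_pill_box")
    perform("serve_drink")
```

The point of the design is that the knowledge lives in one shared place: teach the "cloud" a new recipe once, and every connected robot can perform it without being reprogrammed.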

Using this approach, robots become ever cheaper to manufacture, something that may result in us having servant robots in our homes in as little as 10 years, experts say. All controlled by the central hive-mind.

I hate to be an "I told you so", but: World, I told you so.

______
What do you think? Are robots taking over the world, and if so, are they going to let us have a place? Find out my thoughts in the upcoming conclusion of my series "the Conscious Internet". And as always, please comment below.

Monday, 13 January 2014

The Conscious Internet - Part VI: The Bright Side of Life

A while back I wrote a short article titled "the Conscious Internet" concerning the development of AI and computer technology in regard to the Internet. The article is written with a very philosophical approach to the subject, but deals with real-life facts. It has long been my intention to publish it here on the blog, but I just haven't gotten around to doing so. Until now ...

Here's part 6 of 8. You can find the previous chapter here. Happy reading, and please comment below.


But the real question still remains unanswered: how much intelligence is to be considered intelligent? Looking at the animal kingdom, scientific views quickly begin to differ. Some biologists argue that intelligence can be found in all living organisms, while others recognize only human intelligence.

One accepted definition of intelligence is the ability to draw conclusions about one's environment and adapt accordingly, without any previous knowledge. In other words, it distinguishes between being educated and being intelligent. The two building blocks of this intelligence have been said to be one's cognition and the survival instinct.

A cognitive being will defend itself if attacked in order to ensure its own survival and, in the long run, also that of its species. Ponder for a moment the idea of putting the same type of thinking into a program. A simple line of code would make the program copy itself whenever someone tried to delete it, thus escaping doom. Such a built-in defense mechanism would in its perfect form create an eternal program, impossible to wipe out, without making it particularly intelligent.
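Purely as an illustration of that thought experiment (a handful of lines rather than one, and entirely hypothetical), here is a Python sketch of a program that restores its own file whenever it is deleted: persistence, but no trace of intelligence.

```python
# Illustrative only: a script that re-creates its own file if it is deleted.
# It "survives", yet it is not intelligent in any meaningful sense.
import os
import time

SOURCE_PATH = os.path.abspath(__file__)

with open(SOURCE_PATH, "r", encoding="utf-8") as f:
    my_code = f.read()              # remember my own contents at startup

while True:
    if not os.path.exists(SOURCE_PATH):
        with open(SOURCE_PATH, "w", encoding="utf-8") as f:
            f.write(my_code)        # someone deleted me: write a fresh copy
        print("deletion detected, file restored")
    time.sleep(1.0)                 # check once per second (stop with Ctrl+C)
```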

The answer to this question may not be to establish links between cognition and computers, but rather to introduce computers to cognition. Several movies toy with the idea of being constantly connected, where we are all living out human life on the net. This raises the question of where the cognitive being really exists. Is its existence tied to the physical body, or is it connected to the realm where it perceives its reality? 

Descartes, arguably history's greatest mind when it comes to questions concerning existence, reasoned that the mind always ruled the body. He would probably argue that existence is perceived wherever the mind is present; in other words: living (in) the dream.

... continues in Part VII: Flying over the Cuckoo's Nest

Thursday, 2 January 2014

The Conscious Internet - Part V: Deciphering the Logic





A while back I wrote a short article titled "the Conscious Internet" concerning the development of AI and computer technology in regard to the Internet. The article is written with a very philosophical approach to the subject, but deals with real-life facts. It has long been my intention to publish it here on the blog, but I just haven't gotten around to doing so. Until now ...

Here's part 5 of 8. You can find the previous chapter here. Happy reading, and please comment below.


Deciphering the Logic
When discussing cognition and artificial intelligence, one does not get far before encountering one of the two major theses governing the subject: the Turing Test and the Chinese Room Theory. Computer experts around the world usually adhere to one of the two models, spending countless hours trying to prove its principles.

The Turing Test was formulated in 1950 by the British mathematician Alan M. Turing as a way of proving or disproving whether a machine is intelligent. The test places three people: a man, a woman, and an interrogator, in separate rooms, with no contact with each other except for a link through which they can send messages, for instance via a computer. The interrogator then asks the other two questions in order to determine which recipient is which.

At an unknown point during the test, one of the two being interviewed is replaced by a very clever machine, programmed to answer the interrogator's questions in the most human way possible. The question Turing asked himself was whether a machine could be made smart enough that the interrogator would not be able to tell that a switch had been made.

Turing pointed out that the test alone was not created to, nor was it even suitable for, determining whether a program is intelligent. He saw the possibility of a brilliant machine failing the test if its intelligence differed too much from human intelligence, or of an ingeniously programmed, but in essence fairly dumb, machine tricking the interrogator into thinking it is still talking to a human counterpart.

So far no machine has been able to pass the test. The best programs last for about a minute until they finally trip over themselves and repeat a sentence word for word. A program not only needs to be as quick and witty as its flesh-and-blood counterpart, it also needs to be as dumb and erratic. Some programs have failed because they were too quick and gave back too-accurate information, making the interrogator suspicious. An artificial intelligence has to balance artificial unpredictability and artificial stupidity in order to appear truly intelligent.

The Chinese Room Theory, formulated in 1980 by John Searle, a professor of philosophy at UC Berkeley (University of California, Berkeley), is based on almost the same thinking as the Turing Test. A test subject who doesn't know Chinese is placed alone in a room with only a Chinese instruction manual to keep him company. Messages in Chinese are then sent into the room, and the test subject is asked to answer each message using the manual.

Every message sent into the room has a corresponding answer, given by the manual, which the test subject then passes on. Searle argued that regardless of which messages were sent in and out of the room, the person decoding them would never gain any knowledge of what was relayed, simply because he did not know Chinese.

The relationship with artificial intelligence is quite obvious here. A computer does not understand the code that is being fed into it, nor does it know how to decipher it; the computer simply knows how to react to its programming. It will therefore never, as a cognitive being would, be able to object to code it does not like, regardless of how illogical that code may be. The computer would, in other words, jump off the bridge, if only it were programmed to do so.
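A toy sketch makes Searle's point concrete: the Python program below answers Chinese with Chinese using nothing but a lookup table. The phrases in the "rule book" are arbitrary placeholders of my own, and no understanding is involved anywhere in the loop.

```python
# A toy Chinese Room: incoming symbols are matched to outgoing symbols by a
# lookup table. The program "replies in Chinese" without knowing any Chinese.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",       # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "天气很好。",   # "How's the weather?" -> "The weather is nice."
}

def room_reply(message):
    # The "person in the room" only matches shapes; meaning never enters.
    return RULE_BOOK.get(message, "请再说一遍。")  # fallback: "Please say that again."

if __name__ == "__main__":
    print(room_reply("你好吗？"))
    print(room_reply("今天天气怎么样？"))
```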

... continues in Part VI: The Bright Side of Life

Thursday, 19 December 2013

The Conscious Internet - Part IV: Running Free


A while back I wrote a short article titled "the Conscious Internet" concerning the development of AI and computer technology in regard to the Internet. The article is written with a very philosophical approach to the subject, but deals with real-life facts. It has long been my intention to publish it here on the blog, but I just haven't gotten around to doing so. Until now ...

Here's part 4 of 8. You can find the previous chapter here. Happy reading, and please comment below.


Running Free

On the second of November 1988, a young man named Robert T. Morris Jr. wanted to play a prank. He was a graduate student at Cornell University, and in the spirit of school competitiveness, he released a simple program on the Internet, a so-called "worm". The coding was modest; all the program did was copy itself until the host computer ran out of memory and crashed. With the opus in hand, Morris hacked into rival school MIT's computer network and unshackled his creation. Using a loophole in the network settings of the Unix operating system, Morris was able to bypass security measures and gain access to the core of the computer. Arriving at its destination, the program began multiplying itself, blocking up memory and thereby rendering the computer useless. When it was done with the first computer, the worm had been told to move on to the neighboring computer and repeat the process. What Morris had not taken into account, however, was that in this case the neighboring computer could be any computer hooked up to the Internet.

The process took less than a second to perform, and within a few hours the chaos was evident. When Morris saw his creation escaping his control, he immediately contacted a friend at Harvard to help him contain the issue. They quickly sent mail to major US servers, cautioning them about the worm and trying to convince them to shut down. But their warnings were already stacked up in the wake of the worm, and the plea was never even delivered. By that moment the worm had already knocked out more than two thousand servers, as well as over ten thousand computers, some of which belonged to NASA, the BRL, and MIT.

It took teams of programmers numerous weeks to sanitize all the affected computers, and the web was left inoperative for several days. What had started as a practical joke between two rival top universities quickly escalated into the largest IT devastation known to man. The total cost of the damage was estimated at more than $53,000, and Morris himself was sentenced to three years of probation, 400 hours of community service, and $10,050 in fines. Morris had negligently tripped over Pandora's box, and despite all his programming skills and knowledge, he was unable to close it.

Suppose now that Morris's worm had been designed with another purpose. Instead of crippling the computers it came across, it would simply copy the contents of each computer and send the information back to a main hub. Morris would then have created a global network where he personally controlled all the information that passed through it. A central computer system controlling most governing entities in the US, all-knowing and aware of any minute change to the network. Sound familiar? Today, such a cataclysmic event may be regarded as far-fetched. Improved anti-virus protection, stronger encryption, and better firewalls should prevent this type of disaster from ever occurring again. This is true to an extent; increased security measures have made it increasingly difficult to hack major systems. At the same time, however, the algorithms used in viruses have also improved, keeping this virtual arms race neck and neck.

... continues in Part V: Deciphering the Logic

Thursday, 12 December 2013

The Conscious Internet - Part III: HAL 9000

A while back I wrote a short article titled "the Conscious Internet" concerning the development of AI and computer technology in regard to the Internet. The article is written with a very philosophical approach to the subject, but deals with real-life facts. It has long been my intention to publish it here on the blog, but I just haven't gotten around to doing so. Until now ...

Here's part 3 of 8. You can find the previous chapter here. Happy reading, and please comment below.

HAL 9000

Since the dawn of invention, man has strived to improve and ease the use of all his tools and creations, something that holds true for computers and for the Internet as well. Programmers spend hours in front of computer screens, slaving to create more user-friendly interfaces, all in an attempt to mimic human communication and interaction as closely as possible. This, in combination with a wealth of science fiction movies, gave birth to the expression Artificial Intelligence, which has become a dominating force behind modern-day computer development. Making computers smarter, able to understand and help the user, is now the cutting edge of modern computer marketing.

In reality it is not the computers that are smart, but rather the logic we fill them with. Programs can be scripted to learn new actions, thereby inventing their own solutions to problems, but this is not without limitations. A computer will always follow the strict set of laws and regulations dictated by its code, and therefore lacks the ability to endlessly come up with new angles and strategies. As a result, a computer faced with the simplest of problems can be incapable of solving a given task simply because it falls outside its main programming. Man, on the other hand, is an irrational and unpredictable being, capable of creating new and illogical adaptations to almost anything. This is why chess guru Garry Kasparov managed to beat the supercomputer Deep Blue in their first set of matchups. Even though Deep Blue had been programmed with every chess move known to man, and had counter-strategies for all of them, Kasparov used innovative, previously unseen strategies that the computer did not know how to counter.

But the second Kasparov made his move, Deep Blue analyzed it, broke it down into its components, and devised strategies against it, making the move obsolete. The same move could never be used twice by Kasparov, and Deep Blue finally beat the old chess wizard the second time they met. In his striving to master the computer, Kasparov was in essence just training it.

... continues in Part IV: Running Free

Tuesday, 10 December 2013

The Conscious Internet - Part II: A New Day Dawns

A while back I wrote a short article titled "the Conscious Internet" concerning the development of AI and computer technology in regard to the Internet. The article is written with a very philosophical approach to the subject, but deals with real-life facts. It has long been my intention to publish it here on the blog, but I just haven't gotten around to doing so. Until now ...

Here's part 2 of 8. You can find the previous chapter here. Happy reading, and please comment below.



A New Day Dawns

The step from movie to reality has, however, time and time again proven to be a giant leap. This is especially true when it comes to science fiction. A walking, talking, and fighting robot may be a thing of the future, but the essence of Terminator, the Internet sibling called "SkyNet", may be closer to us than we think.

The building blocks of our modern-day Internet were laid back in the sixties by none other than the US Department of Defense. Its research branch, the Advanced Research Projects Agency, ARPA for short, had been researching and developing methods to quickly get large amounts of information from the general staff distributed to the front lines. The goal was to be able to relay exact and detailed information, such as positions, maps, and pictures, without having to risk the necks of countless couriers. The Soviets had already launched Sputnik, so the race to position oneself as the dominant monitoring eye in the sky was already lost. The war had to be won on the ground. Instead of linking the stars, ARPA started experimenting with coupling different computers together and then trying to run calculations on one computer remotely from another. Thus the first computer network was born: ARPANET.

But limitations in contemporary technology halted the developers, and ARPANET barely made it off the drawing board. It would be another ten years before the idea of an internet truly surfaced again, this time from the academic world. Lack of funding moved the ARPANET project away from its military roots and into the top universities of the US, which breathed life into the idea once again. The goal remained the same, to quickly and securely send information from one university to another, but the approach had changed. Scientists from MIT, Stanford, and UCLA managed, under close supervision from ARPA, to design an operating system capable of sharing the collective memory of the connected servers, a so-called "time-sharing" system. The concept borrowed from the idea behind the vacation homes of the same name, where several tenants collectively own an apartment and each reserves the right to it for a limited period of time. In a similar way, programs were run on the computers one at a time, sharing the collective strength of all the connected machines. Instead of having a computer run one program at a time, from start to finish, the new operating system allowed for switching between programs, letting several programs simultaneously share the memory's total capacity. This allowed, for the first time, for a setup of servers capable of coping with the enormous data stream created by a network of computers without crashing or taking ages to process the information. The modern Internet as we know it was born.
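For the curious, here is a toy Python sketch of the time-sharing idea described above; the job names and lengths are invented, but the interleaving is the point: no job runs start to finish, yet all of them share the machine.

```python
# Toy time-sharing: instead of running each job start to finish, a scheduler
# hands out one small slice at a time to every unfinished job.

def job(name, steps):
    """A job that yields control back to the scheduler after each step."""
    for i in range(steps):
        print(f"{name}: step {i + 1}/{steps}")
        yield                        # give the processor back

def round_robin(jobs):
    """Give every unfinished job one slice per pass until all are done."""
    while jobs:
        for j in list(jobs):
            try:
                next(j)              # run one slice of this job
            except StopIteration:
                jobs.remove(j)       # finished; drop it from the rotation

if __name__ == "__main__":
    round_robin([job("payroll", 3), job("simulation", 2), job("mail", 4)])
```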

... continues in Part III: HAL 9000

Friday, 6 December 2013

The Conscious Internet - Part I: Judgment Day

A while back I wrote a short article titled "the Conscious Internet" concerning the development of AI and computer technology in regard to the Internet. The article is written with a very philosophical approach to the subject, but deals with real-life facts. It has long been my intention to publish it here on the blog, but I just haven't gotten around to doing so. Until now ...

Here's part 1 of 8. Happy reading, and please comment below.

Intro:
Over the years the Internet has grown into a fixture of our everyday life. It exists today, unchecked, as the largest of all information media, and has with its power seized an important status in our social order. If you're not on the net, you don't exist.

Nevertheless, it seems as if we are only on the threshold of its full potential. If we regard the independence of a mind as the origin of cognition, then the Internet emerges as an entirely new entity. Is there a possibility that a conscious Internet could somehow be created, and if so, could it be by our own human negligence? Is the essence of "mind" really shared by all, or is there a danger in assuming that wisdom is esoteric?

Judgment Day

“Three billion human lives ended on August 29th, 1997. The survivors of the nuclear fire called the war Judgment Day. They lived only to face a new nightmare: the war against the machines.”

Many probably recognize the dark prophecy presented to humanity at the beginning of the cult movie Terminator II. Man lets technology run amok, and the war against our own creations is upon us. The masters have in an instant gone from commanders to prey. The Terminator franchise is built on the premise that the US government builds a colossal network of computers called SkyNet. SkyNet's main purpose is to replace soldiers on the battlefield and instead pilot drones to do the fighting, thereby saving human lives. The network is connected to the entire US defense force, controlling everything from stealth bombers to submarines, in order to perform flawless, unified, and inhuman strikes.

But SkyNet is also built as a huge data warehouse for continuous learning and adaptation. Along its travels through the web, it collects information from the databases and storage centers it passes, relaying the information back to its own mother hub for analysis. Through this process, SkyNet grows ever larger and more powerful, learning new ways to adapt and solve problems from every corner of the Internet. On the 29th of August 1997, SkyNet circles back around the web, discovering itself and thereby learning about its own existence: SkyNet becomes self-aware. Technicians monitoring SkyNet immediately react and try to shut down the main terminal, but it is too late. SkyNet reacts, as any other conscious being would when threatened with termination, and launches a counterattack with all means at its disposal. Within minutes nuclear warheads rain down on earth, and man's doom begins.

... continues in Part II: A New Day Dawns