Computer Brain

Computing your Brain…

Archive for the tag “science”

ETICA European Commission – Ethical Evaluation – Seventh Framework Programme

EU Etica – Ethical Evaluation


Ethics in Science and New Technologies

Because ICT implants in the human body go along with the tendency to commercialize the human body and to treat humans as objects or as biomechanical platforms, such implants are considered a potential threat to human dignity in some contexts.

This document deals with the evaluation of the “ethical analysis” carried out in WP 2, with the aid of the overview of computer and information ethics and biometrical analysis.

Our approach is based on official documents at the European level, as suggested in the “Description of Work”, allowing a comparison between the ethical issues addressed in academic research and the issues likely to be addressed at the level of the European Union.

Among these core values of European institutions we highlight for instance: human dignity, freedom (which includes autonomy, responsibility, persuasion and coercion, informed consent), freedom of research, privacy, justice (which includes: autonomy, consumer protection, cultural diversity, environmental protection, safety, ownership, social inclusion). We also take into consideration the principle of proportionality, the precautionary principle and the principle of transparency as key principles of an “Ethics of European Institutions.”


Since the focus of the ETICA project is on research funded within the FP7 programme, one may assume that no such conflicts could be identified. However, conflicts may arise only in certain areas of application, or the issues that arise may not be regarded as serious enough to exclude the respective research. Also, it has to be noted that Ethics in FP7 concentrates on the research process. Control mechanisms are not in force when it comes to the products of research or possible ethical implications of their use, misuse or unintended consequences of mass use (Stahl et al. 2009, p. 7).

Of course, there are differences in what kind of ICT implant is used, in what context, and how it is connected to which part of the human body. While research on and development of such implants appears to be central to the vision of some technologies, such as Bio- and Neuroelectronics, they seem to play a less prominent role in other perspectives, such as Ambient Intelligence.


We assume that all of the technologies mentioned may raise concerns about the protection of human dignity, for instance in the case of ICT implants in the human body, but they certainly do so in different degrees. The EGE “considers that ICT implants are not per se a danger to human freedom or dignity, but in the case of applications which entail, for instance, the possibility of individual and/or group surveillance, the potential restriction of freedom must be carefully evaluated.”

Since “affective computing” is closely linked with “persuasive technologies,” it tends to undermine the autonomy of the individuals affected.

Affective Computing may give rise to concerns not only with regard to “evil dictatorships” but also in democratic societies, given the potential for manipulation.

Informed consent: Persuasive technologies may become especially problematic if the persuasiveness of a system is used to achieve “informed consent” (Nagenborg 2010).

The use of Affective Computing tools for specific purposes in specific contexts also raises concerns, especially in the case of non-medical applications. Security and surveillance applications, especially if they aim at manipulating persons, might be considered similar to ICT implants.

In the “Description of Technology” it is stated that AmI applications in healthcare might include “computers … in your body [monitoring] your health status at all times.”

Freedom

Privacy: As has been pointed out in the “Ethical Analysis,” the issue of “privacy” has received the most attention in academic literature. … [T]he technology is perceived to have a clear potential to violate the privacy of the user(s). AmI systems may also become part of a larger “surveillant assemblage” (Haggerty and Ericson 2000) if AmI applications become interoperable with other (AmI) systems. For example, the use and exchange of biometric information in such systems is a critical issue because it may make it possible to track a person across otherwise distinct systems.

Therefore, the widespread use of AmI in society, and particularly the interconnectivity and interoperability of such systems, has to be considered in the ranking.

Informed consent: Because AmI systems are designed to become “invisible” and are likely to include machine-user interfaces that are not perceived as such by the users, there is a tendency to undermine the idea of requesting the users’ consent except in a very general form.

Consumer protection: AmI applications might be considered tools for monitoring the environment, including the detection of safety risks or security issues.

Read full document

Aiming To Learn As We Do, A Machine Teaches Itself

Give a computer a task that can be crisply defined — win at chess, predict the weather — and the machine bests humans nearly every time. Yet when problems are nuanced or ambiguous, or require combining varied sources of information, computers are no match for human intelligence.

Few challenges in computing loom larger than unraveling semantics, understanding the meaning of language. One reason is that the meaning of words and phrases hinges not only on their context, but also on background knowledge that humans learn over years, day after day.

Since the start of the year, a team of researchers at Carnegie Mellon University — supported by grants from the Defense Advanced Research Projects Agency and Google, and tapping into a research supercomputing cluster provided by Yahoo — has been fine-tuning a computer system that is trying to master semantics by learning more like a human. Its beating hardware heart is a sleek, silver-gray computer — calculating 24 hours a day, seven days a week — that resides in a basement computer center at the university, in Pittsburgh. The computer was primed by the researchers with some basic knowledge in various categories and set loose on the Web with a mission to teach itself.

“For all the advances in computer science, we still don’t have a computer that can learn as humans do, cumulatively, over the long term,” said the team’s leader, Tom M. Mitchell, a computer scientist and chairman of the machine learning department.


The Never-Ending Language Learning system, or NELL, has made an impressive showing so far. NELL scans hundreds of millions of Web pages for text patterns that it uses to learn facts, 390,000 to date, with an estimated accuracy of 87 percent. These facts are grouped into semantic categories — cities, companies, sports teams, actors, universities, plants and 274 others. The category facts are things like “San Francisco is a city” and “sunflower is a plant.”

NELL also learns facts that are relations between members of two categories. For example, Peyton Manning is a football player (category). The Indianapolis Colts is a football team (category). By scanning text patterns, NELL can infer with a high probability that Peyton Manning plays for the Indianapolis Colts — even if it has never read that Mr. Manning plays for the Colts. “Plays for” is a relation, and there are 280 kinds of relations. The number of categories and relations has more than doubled since earlier this year, and will steadily expand.
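
NELL’s actual learners are more elaborate than any short example can show, but the heart of the pattern step can be sketched in a few lines of Python. In this toy sketch, the patterns, weights, and two-sentence corpus are all invented for illustration; they are not NELL’s learned values.

    # A minimal sketch of pattern-based relation extraction in the spirit of
    # NELL. The patterns, weights, and corpus are invented for illustration;
    # they are not NELL's actual learned values.
    PLAYS_FOR_PATTERNS = {
        "{player} plays for the {team}": 0.9,
        "{team} quarterback {player}": 0.8,
        "{player} signed with the {team}": 0.6,
    }

    def plays_for_score(snippets, player, team):
        """Accumulate weighted evidence that (player, playsFor, team) holds."""
        score = 0.0
        for pattern, weight in PLAYS_FOR_PATTERNS.items():
            phrase = pattern.format(player=player, team=team)
            score += weight * sum(phrase in s for s in snippets)
        return score

    corpus = [
        "Indianapolis Colts quarterback Peyton Manning threw for 300 yards.",
        "Peyton Manning signed with the Indianapolis Colts in 1998.",
    ]
    print(plays_for_score(corpus, "Peyton Manning", "Indianapolis Colts"))  # 1.4

A real system would turn such scores into calibrated probabilities and promote only facts that clear a confidence threshold.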


The learned facts are continuously added to NELL’s growing database, which the researchers call a “knowledge base.” A larger pool of facts, Dr. Mitchell says, will help refine NELL’s learning algorithms so that it finds facts on the Web more accurately and more efficiently over time.

NELL is one project in a widening field of research and investment aimed at enabling computers to better understand the meaning of language. Many of these efforts tap the Web as a rich trove of text to assemble structured ontologies — formal descriptions of concepts and relationships — to help computers mimic human understanding. The ideal has been discussed for years, and more than a decade ago Sir Tim Berners-Lee, who invented the underlying software for the World Wide Web, sketched his vision of a “semantic Web.”

Today, ever-faster computers, an explosion of Web data and improved software techniques are opening the door to rapid progress. Scientists at universities, government labs, Google, Microsoft, I.B.M. and elsewhere are pursuing breakthroughs, along somewhat different paths.

For example, I.B.M.’s “question answering” machine, Watson, shows remarkable semantic understanding in fields like history, literature and sports as it plays the quiz show “Jeopardy!” Google Squared, a research project at the Internet search giant, demonstrates ample grasp of semantic categories as it finds and presents information from around the Web on search topics like “U.S. presidents” and “cheeses.”

Still, artificial intelligence experts agree that the Carnegie Mellon approach is innovative. Many semantic learning systems, they note, are more passive learners, largely hand-crafted by human programmers, while NELL is highly automated. “What’s exciting and significant about it is the continuous learning, as if NELL is exercising curiosity on its own, with little human help,” said Oren Etzioni, a computer scientist at the University of Washington, who leads a project called TextRunner, which reads the Web to extract facts.

Computers that understand language, experts say, promise a big payoff someday. The potential applications range from smarter search (supplying natural-language answers to search queries, not just links to Web pages) to virtual personal assistants that can reply to questions in specific disciplines or activities like health, education, travel and shopping.

“The technology is really maturing, and will increasingly be used to gain understanding,” said Alfred Spector, vice president of research for Google. “We’re on the verge now in this semantic world.”

With NELL, the researchers built a base of knowledge, seeding each kind of category or relation with 10 to 15 examples that are true. In the category for emotions, for example: “Anger is an emotion.” “Bliss is an emotion.” And about a dozen more.
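
The seeding strategy suggests a classic bootstrap loop: trust a few seed facts, learn the contexts they appear in, then let those contexts nominate new facts. Below is a deliberately tiny Python sketch of that loop; the seeds, corpus and regular expressions are invented stand-ins, not NELL’s machinery.

    import re

    # A toy bootstrap loop in the spirit of NELL's seeded learning. The
    # seeds, corpus, and regular expressions are invented stand-ins.
    seeds = {"anger", "bliss"}  # seed members of the "emotion" category
    corpus = [
        "Anger is an emotion that flares quickly.",
        "Bliss is an emotion we all seek.",
        "Joy is an emotion, too.",
        "Paris is a city in France.",
    ]

    # Step 1: learn patterns -- the three words that follow each seed.
    patterns = set()
    for sentence in corpus:
        for seed in seeds:
            m = re.search(rf"{seed}\s+(\w+\s+\w+\s+\w+)", sentence, re.IGNORECASE)
            if m:
                patterns.add(m.group(1).lower())  # e.g. "is an emotion"

    # Step 2: apply patterns -- any new word in a learned context becomes a
    # candidate, to be promoted once enough evidence accumulates.
    candidates = set()
    for sentence in corpus:
        for pattern in patterns:
            m = re.search(rf"(\w+)\s+{pattern}", sentence, re.IGNORECASE)
            if m and m.group(1).lower() not in seeds:
                candidates.add(m.group(1).lower())

    print(candidates)  # {'joy'} -- matched a context learned from the seeds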

The New York Times

EU/USA The Human Brain Project


Goal

The goal of the Human Brain Project (HBP) is to build the informatics, modeling and supercomputing technologies needed to simulate and understand the human brain. Major expected advances include new tools to fight the growing impact of brain disease on public health and well-being, and a new class of technologies with brain-like intelligence, to empower people to make decisions in an increasingly complex information society.

More specifically, the HBP will:

1.  Establish a global multidisciplinary program to organize and informatically analyze basic and clinical data about the brain, and to model, simulate and understand animal and human brains at all levels of organization, from genes to cognition and behavior;

2.  Design and implement an exascale supercomputer with the power and functionality to make these goals feasible, including novel capabilities for real-time model building, interactive simulation, visualization and data access; contribute to longer-term prospects for brain-inspired supercomputing;

3.  Derive novel technologies, beginning with enhancements to current telecommunications, multimedia, internet, ambient intelligence, data storage, real-time data analysis, virtual reality and gaming systems, but leading toward completely new kinds of information processing and genuine intelligence for robots;

4.  Develop applications in medical and pharmacological research, including new diagnostic and disease monitoring tools, simulations of brain disease, and simulations of the effects and side effects of drugs.

Read full article here

Scientists Use Brain Imaging To Reveal The Movies In Our Mind

By Yasmin Anwar, Media Relations | September 22, 2011

Video: Psychology and neuroscience professor Jack Gallant discusses vision reconstruction research, displaying videos and brain images used in his work. Produced by Roxanne Makasdjian, Media Relations.

BERKELEY — Imagine tapping into the mind of a coma patient, or watching one’s own dream on YouTube. With a cutting-edge blend of brain imaging and computer simulation, scientists at the University of California, Berkeley, are bringing these futuristic scenarios within reach.

Using functional Magnetic Resonance Imaging (fMRI) and computational models, UC Berkeley researchers have succeeded in decoding and reconstructing people’s dynamic visual experiences – in this case, watching Hollywood movie trailers.

As yet, the technology can only reconstruct movie clips people have already viewed. However, the breakthrough paves the way for reproducing the movies inside our heads that no one else sees, such as dreams and memories, according to researchers.

The approximate reconstruction (right) of a movie clip (left) is achieved through brain imaging and computer simulation.

“This is a major leap toward reconstructing internal imagery,” said Professor Jack Gallant, a UC Berkeley neuroscientist and coauthor of the study published online today (Sept. 22) in the journal Current Biology. “We are opening a window into the movies in our minds.”

Eventually, practical applications of the technology could include a better understanding of what goes on in the minds of people who cannot communicate verbally, such as stroke victims, coma patients and people with neurodegenerative diseases.

It may also lay the groundwork for brain-machine interface so that people with cerebral palsy or paralysis, for example, can guide computers with their minds.

However, researchers point out that the technology is decades from allowing users to read others’ thoughts and intentions, as portrayed in such sci-fi classics as “Brainstorm,” in which scientists recorded a person’s sensations so that others could experience them.

Previously, Gallant and fellow researchers recorded brain activity in the visual cortex while a subject viewed black-and-white photographs. They then built a computational model that enabled them to predict with overwhelming accuracy which picture the subject was looking at.

In their latest experiment, researchers say they have solved a much more difficult problem by actually decoding brain signals generated by moving pictures.

“Our natural visual experience is like watching a movie,” said Shinji Nishimoto, lead author of the study and a post-doctoral researcher in Gallant’s lab. “In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences.” 

Mind-reading through brain imaging technology is a common sci-fi theme

Nishimoto and two other research team members served as subjects for the experiment, because the procedure requires volunteers to remain still inside the MRI scanner for hours at a time.

They watched two separate sets of Hollywood movie trailers, while fMRI was used to measure blood flow through the visual cortex, the part of the brain that processes visual information. On the computer, the brain was divided into small, three-dimensional cubes known as volumetric pixels, or “voxels.”

“We built a model for each voxel that describes how shape and motion information in the movie is mapped into brain activity,” Nishimoto said.

The brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity.
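
The study itself fit a motion-energy encoding model for each voxel; as a rough stand-in for that fitting step, a regularized linear regression from clip features to a single voxel’s response looks like the Python sketch below. All sizes, features and noise levels are invented placeholders.

    import numpy as np

    # Rough stand-in for the per-voxel encoding step: learn a linear map
    # from movie features to one voxel's response. The real study used
    # motion-energy features; X here is a random placeholder.
    rng = np.random.default_rng(0)

    n_seconds, n_features = 600, 50
    X = rng.standard_normal((n_seconds, n_features))        # features per second
    true_w = rng.standard_normal(n_features)
    y = X @ true_w + 0.5 * rng.standard_normal(n_seconds)   # noisy voxel response

    # Ridge regression: w = (X'X + lambda*I)^(-1) X'y
    lam = 1.0
    w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

    # Predict the voxel's response to unseen footage from its features alone.
    x_new = rng.standard_normal(n_features)
    print("predicted response:", x_new @ w)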

Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject.

Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie.
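
That ranking-and-averaging step can be caricatured in a few lines: correlate the measured activity with each candidate clip’s predicted activity, keep the 100 best matches, and average their frames. The shapes and random data below are placeholders, not the study’s.

    import numpy as np

    # Toy version of the reconstruction step: rank candidate clips by how
    # well their *predicted* brain activity correlates with the *measured*
    # activity, then average the best matches. All data here is random.
    rng = np.random.default_rng(1)

    n_voxels, n_clips = 200, 1000
    measured = rng.standard_normal(n_voxels)              # activity for the test clip
    predicted = rng.standard_normal((n_clips, n_voxels))  # encoding-model predictions
    frames = rng.random((n_clips, 16, 16))                # one frame per candidate clip

    z = (measured - measured.mean()) / measured.std()
    zp = (predicted - predicted.mean(axis=1, keepdims=True)) \
        / predicted.std(axis=1, keepdims=True)
    scores = zp @ z / n_voxels                 # correlation per candidate clip

    top = np.argsort(scores)[-100:]            # the 100 best-matching clips
    reconstruction = frames[top].mean(axis=0)  # blurry average, as in the study
    print(reconstruction.shape)                # (16, 16)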

Reconstructing movies using brain scans has been challenging because the blood flow signals measured using fMRI change much more slowly than the neural signals that encode dynamic information in movies, researchers said. For this reason, most previous attempts to decode brain activity have focused on static images.

“We addressed this problem by developing a two-stage model that separately describes the underlying neural population and blood flow signals,” Nishimoto said.
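
The article leaves the model details out, but the general problem it addresses can be shown with a generic hemodynamic convolution: brief neural events get smeared into a slow blood-flow signal. The gamma-shaped kernel below is a textbook-style stand-in, not the study’s fitted hemodynamic model.

    import numpy as np

    # Generic illustration of the problem: brief neural events are smeared
    # into a slow blood-flow (BOLD) signal. The gamma-shaped kernel is a
    # textbook-style stand-in, not the study's fitted hemodynamic model.
    t = np.arange(0, 20, 1.0)            # seconds
    hrf = (t ** 5) * np.exp(-t)          # crude hemodynamic response function
    hrf /= hrf.sum()

    neural = np.zeros(200)
    neural[[20, 60, 61, 62, 130]] = 1.0       # brief bursts of neural activity

    bold = np.convolve(neural, hrf)[:200]     # the sluggish signal fMRI measures
    print("burst near t=60 s; blood-flow peak at t =", bold.argmax(), "s")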

Ultimately, Nishimoto said, scientists need to understand how the brain processes dynamic visual events that we experience in everyday life.

“We need to know how the brain works in naturalistic conditions,” he said. “For that, we need to first understand how the brain works while we are watching movies.”

Other coauthors of the study are Thomas Naselaris with UC Berkeley’s Helen Wills Neuroscience Institute; An T. Vu with UC Berkeley’s Joint Graduate Group in Bioengineering; and Yuval Benjamini and Professor Bin Yu with the UC Berkeley Department of Statistics.


Bits of the Future: First Universal Quantum Network Prototype Links 2 Separate Labs

Science News

Physicists demonstrate a scalable quantum network that ought to be adaptable for all manner of long-distance quantum communication

By John Matson | April 11, 2012

QUANTUM NETWORK: Networks based on single atoms, linked by the exchange of single photons, could form the basis of versatile quantum networks. Image: Andreas Neuzner/M.P.Q.

Quantum technologies are the way of the future, but will that future ever arrive?

Maybe so. Physicists have cleared a bit more of the path to a plausible quantum future by constructing an elementary network for exchanging and storing quantum information. The network features two all-purpose nodes that can send, receive and store quantum information, linked by a fiber-optic cable that carries it from one node to another on a single photon.

The network is only a prototype, but if it can be refined and scaled up, it could form the basis of communication channels for relaying quantum information. A group from the Max Planck Institute of Quantum Optics (M.P.Q.) in Garching, Germany, described the advance in the April 12 issue of Nature. (Scientific American is part of Nature Publishing Group.)

Quantum bits, or qubits, are at the heart of quantum information technologies. An ordinary, classical bit in everyday electronics can store one of two values: a 0 or a 1. But thanks to the indeterminacy inherent to quantum mechanics, a qubit can be in a so-called superposition, hovering undecided between 0 and 1, which adds a layer of complexity to the information it carries. Quantum computers would boast capabilities beyond the reach of even the most powerful classical supercomputers, and cryptography protocols based on the exchange of qubits would be more secure than traditional encryption methods.
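
A qubit’s superposition is easy to picture numerically: two complex amplitudes whose squared magnitudes give the measurement probabilities. The following sketch is generic textbook quantum mechanics, not a model of the Garching hardware.

    import numpy as np

    # A qubit as two complex amplitudes (alpha, beta) with
    # |alpha|^2 + |beta|^2 = 1; an equal superposition yields 0 or 1 with
    # probability 1/2 each when measured.
    alpha = beta = 1 / np.sqrt(2)
    state = np.array([alpha, beta], dtype=complex)

    probs = np.abs(state) ** 2                          # Born rule: [0.5, 0.5]
    outcome = np.random.default_rng().choice([0, 1], p=probs)
    print("P(0), P(1) =", probs, "-> measured:", outcome)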

Physicists have used all manner of quantum objects to store qubits—electrons, atomic nuclei, photons and so on. In the new demonstration, the qubit at each node of the network is stored in the internal quantum state of a single rubidium atom trapped in a reflective optical cavity. The atom can then transmit its stored information via an optical fiber by emitting a single photon, whose polarization state carries the mark of its parent atom’s quantum state; conversely, the atom can absorb a photon from the fiber and take on the quantum state imprinted on that photon’s polarization.

Because each node can perform a variety of functions—sending, receiving or storing quantum information—a network based on atoms in optical cavities could be scaled up simply by connecting more all-purpose nodes. “We try to build a system where the network node is universal,” says M.P.Q. physicist Stephan Ritter, one of the study’s authors. “It’s not only capable of sending or receiving—ideally, it would do all of the things you could imagine.” The individual pieces of such a system had been demonstrated—atoms sending quantum information on single emitted photons, say—but now the technologies are sufficiently advanced that they can work as an ensemble. “This has now all come together and enabled us to realize this elementary version of a quantum network,” Ritter says.

Physicists proposed using optical cavities for quantum networks 15 years ago, because they marry the best features of atomic qubits and photonic qubits—namely that atoms stay put, making them an ideal storage medium, whereas photons are speedy, making them an ideal message carrier between stationary nodes. But getting the photons and atoms to communicate with one another has been a challenge. “If you want to use single atoms and single photons, as we do, they hardly interact,” Ritter adds.

That is where the optical cavity comes in. The mirrors of the cavity reflect a photon past the rubidium atom tens of thousands of times, boosting the chances of an interaction. “During this time, there’s enough time to really do this information exchange in a reliable way,” Ritter says. “The cavity enhances the coupling between the light field and the atom.”
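
A back-of-envelope calculation shows why all those reflections matter: even if a single pass past the atom succeeds with tiny probability, tens of thousands of passes make an interaction nearly certain. The numbers below are hypothetical round figures, not the experiment’s measured parameters.

    # Back-of-envelope for why the cavity helps: if one pass of the photon
    # past the atom interacts with only a tiny probability p, reflecting it
    # N times makes an interaction nearly certain. Hypothetical round
    # figures, not the experiment's measured parameters.
    p = 1e-4
    for n_passes in (1, 1_000, 20_000):
        p_total = 1 - (1 - p) ** n_passes
        print(f"{n_passes:>6} passes -> interaction probability {p_total:.3f}")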

Quantum Memory for Communication Networks of the Future

ScienceDaily (Nov. 8, 2010) — Researchers from the Niels Bohr Institute at the University of Copenhagen have succeeded in storing quantum information using two ‘entangled’ light beams. Quantum memory or information storage is a necessary element of future quantum communication networks. The new findings are published in Nature Physics.



Quantum networks will be able to protect the security of information better than current conventional communication networks. The cornerstone of quantum communication is a phenomenon called entanglement between two quantum systems, for example, two light beams. Entanglement means that the two light beams are connected to each other, so that they have well-defined common characteristics, a kind of common knowledge. According to the laws of quantum mechanics, a quantum state cannot be copied, and it can therefore be used to transfer data in a secure way.

In Professor Eugene Polzik’s research group Quantop at the Niels Bohr Institute, researchers have now been able to store the two entangled light beams in two quantum memories. The research is conducted in a laboratory where a forest of mirrors and optical elements such as wave plates, beam splitters and lenses is set up on a large table, sending the light on a more than 10-meter-long labyrinthine journey. Using the optical elements, the researchers control the light and regulate its size and intensity to get just the right wavelength and polarisation for the experiment.

The two entangled light beams are created by sending a single blue light beam through a crystal where the blue light beam is split up into two red light beams. The two red light beams are entangled, so they have a common quantum state. The quantum state itself is information.
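
The “common knowledge” of the two beams can be illustrated with the simplest entangled object there is, a two-qubit Bell state, where measuring one side always fixes the other. As the comments note, the Copenhagen experiment used continuous-variable entangled beams, so this is an analogy, not a model of the setup.

    import numpy as np

    # Toy picture of entanglement as "common knowledge": a Bell state
    # (|00> + |11>)/sqrt(2) of two two-level systems. Measuring one side
    # always fixes the other. The Copenhagen experiment used
    # continuous-variable entangled beams; qubits are just the simplest
    # illustration of the same idea.
    bell = np.zeros(4, dtype=complex)
    bell[0] = bell[3] = 1 / np.sqrt(2)

    probs = np.abs(bell) ** 2                  # outcomes 00, 01, 10, 11
    rng = np.random.default_rng(3)
    for s in rng.choice(4, size=5, p=probs):
        a, b = divmod(s, 2)
        print(f"beam A: {a}, beam B: {b}")     # the two results always agree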

The two light beams are sent on through the labyrinth of mirrors and optical elements and reach the two memories, which in the experiment are two glass containers filled with a gas of caesium atoms. The atoms’ quantum state contains information in the form of a so-called spin, which can be either ‘up’ or ‘down’. It can be compared with computer data, which consists of the digits 0 and 1. When the light beams pass the atoms, the quantum state is transferred from the two light beams to the two memories. The information has thus been stored as the new quantum state in the atoms.

“For the first time such a memory has been demonstrated with a very high degree of reliability. In fact, it is so good that it is impossible to obtain with conventional memory for light that is used in, for example, internet communication. This result means that a quantum network is one step closer to being a reality,” explains professor Eugene Polzik.

IBM Attempts to Build Computer ‘Brain’


by Brian Thomas, M.S. *

IBM researchers are working on a new computing device that could process massive data sets while using very little energy. It would also be able to quickly learn and remember patterns, which might make it able to “issue tsunami warnings in case of an earthquake” or calculate the likelihood of contaminated produce on grocers’ shelves.1

The inspiration for their prototype? The human brain.

The project, funded by the U.S. Defense Advanced Research Projects Agency, is called SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics). Its principal investigator, IBM’s Dharmendra Modha, told the technology news organization VentureBeat:

The computers we have today are more like calculators. We want to make something like the brain. It is a sharp departure from the past.1

Thus, “neurosynaptic computing chips” have been engineered as the first stage in building a more brain-like computer. Core differences separate even the most powerful modern computers from the enormously more powerful human brain. In today’s computers, memory devices are separate from processors, so the two are connected by a channel called a “bus.”

The size of the bus often determines the flow rate of information, and this can impact the computer’s processing capabilities. But processing and memory in brains, as far as researchers know, operate in the same place and time. This increased efficiency in internal connections has the side benefit of requiring much less energy.

The article on VentureBeat’s website featured links to online videos released by IBM describing the project.2 In one, IBM researcher John Arthur remarked, “We can’t find anything better than the brain” at recognizing and remembering complex patterns.

The brain consists of billions of neurons that connect with each other via trillions of synapses. Each neuron has vast numbers of individual proteins that act as computational switches, making the total computational power of the human brain literally astronomical. In fact, one recent study compared the number of brain synapses to the number of stars in 1,500 Milky Way galaxies, which could be more than 450 trillion.3 Modha’s estimate of 100 trillion synapses is at least within the same order of magnitude. But all of those brain synapses operate on very low energy, and huge sections of them “dial down” when not in use.

The SyNAPSE research group hopes that its machine will model the brain’s “structural and synaptic plasticity,” which would enable it to rewire itself according to what it learns; a toy version of such a plasticity rule is sketched at the end of this section.

In the end, millions of dollars, thousands of man-hours, and the countless painstaking efforts of intelligent engineers have resulted in two silicon-based processing chips, each consisting of only 65,536 electronic synapses. Apparently, the chips are able to play the first Atari video game, “Pong,” in which two simulated paddles bounce a dot back and forth across a computer screen. The next phase, requiring an unknown additional quantity of time, energy, and funding, should result in multiple chips wired together to make a prototype computer. Eventually, the researchers aim to build one “as powerful [as] the human brain.”

Some of the best and brightest engineering brains are involved in seeing this project to completion. If and when they succeed, they will also have succeeded in proving that the human brain they used as their model could only have been created through intelligently and purposefully directed power. Something that intricately designed could never have “just happened.”
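
As promised above, here is a minimal sketch of what “synaptic plasticity” can mean computationally: a Hebbian rule in which a connection strengthens whenever the neurons on both ends fire together. It is a generic textbook rule with invented sizes and rates, not IBM’s chip design.

    import numpy as np

    # A toy Hebbian plasticity rule: a synaptic weight strengthens whenever
    # the neurons on both ends fire together, so the network "rewires"
    # itself from experience. Generic textbook rule, not IBM's chip design.
    rng = np.random.default_rng(2)

    n_pre, n_post = 8, 4
    w = 0.1 * rng.random((n_post, n_pre))   # synaptic weights
    lr = 0.05                               # learning rate

    for _ in range(100):
        pre = (rng.random(n_pre) < 0.3).astype(float)   # spiking inputs
        post = (w @ pre > 0.5).astype(float)            # thresholded outputs
        w += lr * np.outer(post, pre)     # fire together, wire together
        w = np.clip(w, 0.0, 1.0)          # keep weights bounded

    print(w.round(2))   # co-active pathways end up with strengthened weights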

References:

  1. Takahashi, D. IBM produces first working chips modeled on the human brain. VentureBeat. Posted on venturebeat.com August 17, 2011, accessed August 24, 2011.
  2. SyNAPSE: IBM Cognitive Computing Project—Hardware. IBM Research Almaden. Posted on youtube.com April 19, 2011, accessed August 24, 2011.
  3. If the Milky Way galaxy has 300 billion stars, then 1,500 of these galaxies would equal 450 trillion stars. See Thomas, B. Brain’s Complexity ‘Is Beyond Anything Imagined.’ ICR News. Posted on icr.org January 17, 2011, accessed August 24, 2011.

* Mr. Thomas is Science Writer at the Institute for Creation Research.
