Computer Brain

Computing your Brain…


ETICA European Commission – Ethical Evaluation – Seventh Framework Programme

EU ETICA – Ethical Evaluation


Ethics in Science and New Technologies

Because ICT implants in the human body go along with the tendency to commercialize the human body and treat humans as objects or as biomechanical platforms, implants are considered a potential threat to human dignity in some contexts.

This document deals with the evaluation of the “ethical analysis” carried out in WP 2, with the aid of the overview of computer and information ethics and biometrical analysis.

Our approach is based on official documents at the European level, as suggested in the “Description of Work”, allowing a comparison between the ethical issues addressed in academic research and the issues likely to be addressed at the level of the European Union.

Among these core values of European institutions we highlight, for instance: human dignity; freedom (which includes autonomy, responsibility, persuasion and coercion, informed consent); freedom of research; privacy; and justice (which includes autonomy, consumer protection, cultural diversity, environmental protection, safety, ownership and social inclusion). We also take into consideration the principle of proportionality, the precautionary principle and the principle of transparency as key principles of an “Ethics of European Institutions”.


Since the focus of the ETICA project is on research funded within the FP7 programme, one might assume that no such conflicts could be identified. However, conflicts may arise only in certain areas of application, or, while issues may arise, they may not be regarded as serious enough to exclude the respective research. It also has to be noted that Ethics in FP7 concentrates on the research process: control mechanisms are not in force when it comes to the products of research or the possible ethical implications of their use, misuse or unintended consequences of mass use (Stahl et al. 2009, p. 7).

Of course, there are differences in what kind of ICT implant is used in what context and how it is connected to which part of the human body. While research on and development of such implants appear central to the vision of some technologies, like Bio- and Neuroelectronics, they seem to play a less prominent role in other perspectives, like Ambient Intelligence.


We assume that all the technologies mentioned may raise concerns about the protection of human dignity, for instance in the case of ICT implants in the human body, but they certainly do so to different degrees. The EGE considers that ICT implants are not per se a danger to human freedom or dignity, but in the case of applications which entail, for instance, the possibility of individual and/or group surveillance, the potential restriction of freedom must be carefully evaluated.

Since “affective computing” is closely linked with “persuasive technologies”, it tends to undermine the autonomy of the individuals affected.

Affective Computing may give rise to concerns not only with regard to “evil dictatorships” but also in democratic societies, given the potential for manipulation.

Informed consent: Persuasive technologies may become especially problematic if the persuasiveness of a system is used to achieve “informed consent” (Nagenborg 2010).

The use of Affective Computing tools for specific purposes in specific contexts raises concerns, especially in the case of non-medical applications. Security and surveillance applications, especially if they aim at manipulating persons, might be considered similar to ICT implants.

In the “Description of Technology” it is stated that AmI applications in healthcare might include “computers … in your body [monitoring] your health status at all times”.

Freedom

Privacy: As has been pointed out in the “Ethical Analysis”, the issue of privacy has received the most attention in the academic literature. … [T]he technology is perceived to have a clear potential to violate the privacy of the user(s). AmI systems may also become part of a larger “surveillant assemblage” (Haggerty and Ericson 2000) if AmI applications become interoperable with other (AmI) systems. For example, the use and exchange of biometric information in such systems is a critical issue, because it may make it possible to track a person across otherwise distinct systems.

Therefore, the widespread use of AmI in society, and particularly the interconnectivity and interoperability of such systems, has to be considered in the ranking.

Informed consent: Because AmI systems are designed to become “invisible” and are likely to include machine-user interfaces that are not perceived as such by the users, there is a tendency to undermine the idea of requesting the consent of users except in a very general form.

Consumer protection: AmI applications might be considered as tools for monitoring the environment including the detection of safety risks or security issues.

Read full document

Aiming To Learn As We Do, A Machine Teaches Itself

Give a computer a task that can be crisply defined — win at chess, predict the weather — and the machine bests humans nearly every time. Yet when problems are nuanced or ambiguous, or require combining varied sources of information, computers are no match for human intelligence.

Few challenges in computing loom larger than unraveling semantics, understanding the meaning of language. One reason is that the meaning of words and phrases hinges not only on their context, but also on background knowledge that humans learn over years, day after day.

Since the start of the year, a team of researchers at Carnegie Mellon University — supported by grants from the Defense Advanced Research Projects Agency and Google, and tapping into a research supercomputing cluster provided by Yahoo — has been fine-tuning a computer system that is trying to master semantics by learning more like a human. Its beating hardware heart is a sleek, silver-gray computer — calculating 24 hours a day, seven days a week — that resides in a basement computer center at the university, in Pittsburgh. The computer was primed by the researchers with some basic knowledge in various categories and set loose on the Web with a mission to teach itself.

“For all the advances in computer science, we still don’t have a computer that can learn as humans do, cumulatively, over the long term,” said the team’s leader, Tom M. Mitchell, a computer scientist and chairman of the machine learning department.


The Never-Ending Language Learning system, or NELL, has made an impressive showing so far. NELL scans hundreds of millions of Web pages for text patterns that it uses to learn facts, 390,000 to date, with an estimated accuracy of 87 percent. These facts are grouped into semantic categories — cities, companies, sports teams, actors, universities, plants and 274 others. The category facts are things like “San Francisco is a city” and “sunflower is a plant.”

NELL also learns facts that are relations between members of two categories. For example, Peyton Manning is a football player (category). The Indianapolis Colts is a football team (category). By scanning text patterns, NELL can infer with a high probability that Peyton Manning plays for the Indianapolis Colts — even if it has never read that Mr. Manning plays for the Colts. “Plays for” is a relation, and there are 280 kinds of relations. The number of categories and relations has more than doubled since earlier this year, and will steadily expand.
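The category-and-relation structure described above can be sketched in a toy Python snippet. This is purely illustrative: the crude co-occurrence pattern and the data structures are assumptions for the example, not NELL's actual code.

```python
# Toy knowledge base: category facts map an entity to a category,
# relation facts are (subject, relation, object) triples.
categories = {
    "San Francisco": "city",
    "sunflower": "plant",
    "Peyton Manning": "football player",
    "Indianapolis Colts": "football team",
}
relations = set()

def infer_plays_for(text, kb_categories):
    """Propose a 'plays for' relation when a known player and a known
    team co-occur in the same text (a deliberately crude pattern)."""
    proposals = []
    for player, pcat in kb_categories.items():
        if pcat != "football player":
            continue
        for team, tcat in kb_categories.items():
            if tcat == "football team" and player in text and team in text:
                proposals.append((player, "plays for", team))
    return proposals

text = "Peyton Manning threw two touchdowns as the Indianapolis Colts won."
for fact in infer_plays_for(text, categories):
    relations.add(fact)

print(relations)
# → {('Peyton Manning', 'plays for', 'Indianapolis Colts')}
```

Even though the text never states the relation explicitly, the co-occurrence of a known player and a known team lets the sketch propose it, which is the flavor of the inference described above.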


The learned facts are continuously added to NELL’s growing database, which the researchers call a “knowledge base.” A larger pool of facts, Dr. Mitchell says, will help refine NELL’s learning algorithms so that it finds facts on the Web more accurately and more efficiently over time.

NELL is one project in a widening field of research and investment aimed at enabling computers to better understand the meaning of language. Many of these efforts tap the Web as a rich trove of text to assemble structured ontologies — formal descriptions of concepts and relationships — to help computers mimic human understanding. The ideal has been discussed for years, and more than a decade ago Sir Tim Berners-Lee, who invented the underlying software for the World Wide Web, sketched his vision of a “semantic Web.”

Today, ever-faster computers, an explosion of Web data and improved software techniques are opening the door to rapid progress. Scientists at universities, government labs, Google, Microsoft, I.B.M. and elsewhere are pursuing breakthroughs, along somewhat different paths.

For example, I.B.M.’s “question answering” machine, Watson, shows remarkable semantic understanding in fields like history, literature and sports as it plays the quiz show “Jeopardy!” Google Squared, a research project at the Internet search giant, demonstrates ample grasp of semantic categories as it finds and presents information from around the Web on search topics like “U.S. presidents” and “cheeses.”

Still, artificial intelligence experts agree that the Carnegie Mellon approach is innovative. Many semantic learning systems, they note, are more passive learners, largely hand-crafted by human programmers, while NELL is highly automated. “What’s exciting and significant about it is the continuous learning, as if NELL is exercising curiosity on its own, with little human help,” said Oren Etzioni, a computer scientist at the University of Washington, who leads a project called TextRunner, which reads the Web to extract facts.

Computers that understand language, experts say, promise a big payoff someday. The potential applications range from smarter search (supplying natural-language answers to search queries, not just links to Web pages) to virtual personal assistants that can reply to questions in specific disciplines or activities like health, education, travel and shopping.

“The technology is really maturing, and will increasingly be used to gain understanding,” said Alfred Spector, vice president of research for Google. “We’re on the verge now in this semantic world.”

With NELL, the researchers built a base of knowledge, seeding each kind of category or relation with 10 to 15 examples that are true. In the category for emotions, for example: “Anger is an emotion.” “Bliss is an emotion.” And about a dozen more.
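The seed-and-bootstrap idea can be illustrated with a tiny Python sketch: start from a handful of seed facts, learn text patterns from sentences that mention them, then apply those patterns to harvest new candidates. The corpus and the single regex pattern here are invented for the example; NELL's real pattern learner is far more sophisticated.

```python
import re

seeds = {"anger", "bliss", "joy"}          # seed members of the "emotion" category
corpus = [
    "Anger is an emotion that flares quickly.",
    "Bliss is an emotion rarely described.",
    "Envy is an emotion too.",
    "Paris is a city in France.",
]

# Step 1: find an extraction pattern around known seed facts.
patterns = set()
for sentence in corpus:
    for seed in seeds:
        if re.search(re.escape(seed) + r" is an emotion", sentence, re.I):
            patterns.add(r"(\w+) is an emotion")

# Step 2: apply the learned pattern to extract new candidate facts.
candidates = set()
for sentence in corpus:
    for pat in patterns:
        m = re.search(pat, sentence, re.I)
        if m and m.group(1).lower() not in seeds:
            candidates.add(m.group(1).lower())

print(candidates)
# → {'envy'}
```

The sentence about Paris never matches the learned pattern, so only a plausible new emotion is proposed; iterating this loop over the Web is the essence of bootstrapped fact learning.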

The New York Times

EU/USA The Human Brain Project

 

Goal

The goal of the Human Brain Project (HBP) is to build the informatics, modeling and supercomputing technologies needed to simulate and understand the human brain. Major expected advances include new tools to fight the growing impact of brain disease on public health and well-being, and a new class of technologies with brain-like intelligence, to empower people to make decisions in an increasingly complex information society.

More specifically, the HBP will:

1.  Establish a global multidisciplinary program to organize and informatically analyze basic and clinical data about the brain, and to model, simulate and understand animal and human brains at all levels of organization, from genes to cognition and behavior;

2.  Design and implement an exascale supercomputer with the power and functionality to make these goals feasible, including novel capabilities for real-time model building, interactive simulation, visualization and data access; contribute to longer-term prospects for brain-inspired supercomputing;

3.  Derive novel technologies, beginning with enhancements to current telecommunications, multimedia, internet, ambient intelligence, data storage, real-time data analysis, virtual reality and gaming systems, but leading toward completely new kinds of information processing and genuine intelligence for robots;

4.  Develop applications in medical and pharmacological research, including new diagnostic and disease monitoring tools, simulations of brain disease, and simulations of the effects and side effects of drugs.

Read full article here

Scientists Use Brain Imaging To Reveal The Movies In Our Mind

By Yasmin Anwar, Media Relations | September 22, 2011

Professor Jack Gallant discusses vision reconstruction research
Psychology and neuroscience professor Jack Gallant displays videos and brain images used in his research. Video produced by Roxanne Makasdjian, Media Relations.

BERKELEY — Imagine tapping into the mind of a coma patient, or watching one’s own dream on YouTube. With a cutting-edge blend of brain imaging and computer simulation, scientists at the University of California, Berkeley, are bringing these futuristic scenarios within reach.

Using functional Magnetic Resonance Imaging (fMRI) and computational models, UC Berkeley researchers have succeeded in decoding and reconstructing people’s dynamic visual experiences – in this case, watching Hollywood movie trailers.

As yet, the technology can only reconstruct movie clips people have already viewed. However, the breakthrough paves the way for reproducing the movies inside our heads that no one else sees, such as dreams and memories, according to researchers.

The approximate reconstruction (right) of a movie clip (left) is achieved through brain imaging and computer simulation.

“This is a major leap toward reconstructing internal imagery,” said Professor Jack Gallant, a UC Berkeley neuroscientist and coauthor of the study published online today (Sept. 22) in the journal Current Biology. “We are opening a window into the movies in our minds.”

Eventually, practical applications of the technology could include a better understanding of what goes on in the minds of people who cannot communicate verbally, such as stroke victims, coma patients and people with neurodegenerative diseases.

It may also lay the groundwork for brain-machine interface so that people with cerebral palsy or paralysis, for example, can guide computers with their minds.

However, researchers point out that the technology is decades from allowing users to read others’ thoughts and intentions, as portrayed in such sci-fi classics as “Brainstorm,” in which scientists recorded a person’s sensations so that others could experience them.

Previously, Gallant and fellow researchers recorded brain activity in the visual cortex while a subject viewed black-and-white photographs. They then built a computational model that enabled them to predict with overwhelming accuracy which picture the subject was looking at.

In their latest experiment, researchers say they have solved a much more difficult problem by actually decoding brain signals generated by moving pictures.

“Our natural visual experience is like watching a movie,” said Shinji Nishimoto, lead author of the study and a post-doctoral researcher in Gallant’s lab. “In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences.” 

Mind-reading through brain imaging technology is a common sci-fi theme

Nishimoto and two other research team members served as subjects for the experiment, because the procedure requires volunteers to remain still inside the MRI scanner for hours at a time.

They watched two separate sets of Hollywood movie trailers, while fMRI was used to measure blood flow through the visual cortex, the part of the brain that processes visual information. On the computer, the brain was divided into small, three-dimensional cubes known as volumetric pixels, or “voxels.”

“We built a model for each voxel that describes how shape and motion information in the movie is mapped into brain activity,” Nishimoto said.

The brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity.

Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject.

Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie.
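The pipeline above can be roughed out in a few lines of Python. All dimensions are made up, and a plain ridge regression stands in for the study's far richer per-voxel encoding models: fit a model mapping clip features to voxel responses, score candidate clips by how well their predicted activity matches the measured activity, and average the best matches.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_features, n_voxels = 200, 50, 30

# Training clips: motion/shape features and the voxel responses they evoked.
X_train = rng.normal(size=(n_train, n_features))
W_true = rng.normal(size=(n_features, n_voxels))          # unknown "ground truth"
Y_train = X_train @ W_true + 0.1 * rng.normal(size=(n_train, n_voxels))

# Fit one ridge-regression encoding model per voxel (solved jointly here).
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_features),
                    X_train.T @ Y_train)

# A new, unseen clip evokes this measured activity:
x_test = rng.normal(size=n_features)
y_measured = x_test @ W_true

# Score a pool of candidate clips: predict each one's voxel response and
# correlate it with the measured response.
candidates = rng.normal(size=(1000, n_features))
pred = candidates @ W
scores = [np.corrcoef(p, y_measured)[0, 1] for p in pred]
top = np.argsort(scores)[-100:]                            # 100 best matches

# Averaging the top candidates yields a blurry but continuous reconstruction.
reconstruction = candidates[top].mean(axis=0)
print(reconstruction.shape)  # → (50,)
```

Averaging many near-matches rather than picking a single winner is what produces the "blurry yet continuous" character of the reconstructions described above.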

Reconstructing movies using brain scans has been challenging because the blood flow signals measured using fMRI change much more slowly than the neural signals that encode dynamic information in movies, researchers said. For this reason, most previous attempts to decode brain activity have focused on static images.

“We addressed this problem by developing a two-stage model that separately describes the underlying neural population and blood flow signals,” Nishimoto said.
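The core of that two-stage idea can be sketched numerically (the gamma shape and timing values here are illustrative assumptions, not the study's fitted parameters): stage one produces a fast underlying neural response to the stimulus, and stage two convolves it with a slow hemodynamic response function (HRF), so the modeled fMRI blood-flow (BOLD) signal peaks seconds after the neural activity.

```python
import math
import numpy as np

t = np.arange(0, 20, 0.5)                          # time in seconds
hrf = (t ** 5) * np.exp(-t) / math.factorial(5)    # crude gamma-shaped HRF

# Stage 1: a fast neural response (a brief burst around t = 1 s).
neural = np.zeros_like(t)
neural[2:4] = 1.0

# Stage 2: the sluggish BOLD signal is the neural response smeared by the HRF.
bold = np.convolve(neural, hrf)[:len(t)]

# The BOLD peak lags the neural burst by several seconds, which is why
# decoding fast dynamic stimuli from fMRI is hard.
print(t[np.argmax(neural)], t[np.argmax(bold)])
```

Separating the two stages lets a model account for the slow blood-flow filter explicitly instead of trying to map stimuli straight onto the smeared signal.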

Ultimately, Nishimoto said, scientists need to understand how the brain processes dynamic visual events that we experience in everyday life.

“We need to know how the brain works in naturalistic conditions,” he said. “For that, we need to first understand how the brain works while we are watching movies.”

Other coauthors of the study are Thomas Naselaris with UC Berkeley’s Helen Wills Neuroscience Institute; An T. Vu with UC Berkeley’s Joint Graduate Group in Bioengineering; and Yuval Benjamini and Professor Bin Yu with the UC Berkeley Department of Statistics.
