
For instance, why is our consciousness generated by certain parts of our brain, such as the thalamocortical system, and not by other parts, such as the cerebellum? And why are we conscious during wakefulness and much less so during dreamless sleep? The second problem is understanding the conditions that determine what kind of consciousness a system has. For example, why do specific parts of the brain contribute specific qualities to our conscious experience, such as vision and audition? This paper presents a theory about what consciousness is and how it can be measured.

According to the theory, consciousness corresponds to the capacity of a system to integrate information. This claim is motivated by two key phenomenological properties of consciousness: differentiation — the availability of a very large number of conscious experiences; and integration — the unity of each such experience.

The theory also claims that the quality of consciousness is determined by the informational relationships among the elements of a complex, which are specified by the values of effective information among them. Finally, each particular conscious experience is specified by the value, at any given time, of the variables mediating informational interactions among the elements of a complex.

The information integration theory accounts, in a principled manner, for several neurobiological observations concerning consciousness. As shown here, these include the association of consciousness with certain neural systems rather than with others; the fact that neural processes underlying consciousness can influence or be influenced by neural processes that remain unconscious; the reduction of consciousness during dreamless sleep and generalized seizures; and the time requirements on neural interactions that support consciousness.

The theory entails that consciousness is a fundamental quantity, that it is graded, that it is present in infants and animals, and that it should be possible to build conscious artifacts. Consciousness is everything we experience. Think of it as what abandons us every night when we fall into dreamless sleep and returns the next morning when we wake up [ 1 ]. Without consciousness, as far as we are concerned, there would be neither an external world nor our own selves: there would be nothing at all. To understand consciousness, two main problems need to be addressed. The first problem is to understand the conditions that determine to what extent a system has consciousness.

For example, why is it that certain parts of the brain are important for conscious experience, whereas others, equally rich in neurons and connections, are not? And why are we conscious during wakefulness or dreaming sleep, but much less so during dreamless sleep, even if the brain remains highly active? The second problem is to understand the conditions that determine what kind of consciousness a system has.



For example, what determines the specific and seemingly irreducible quality of the different modalities (e.g., vision and audition)? Why do colors look the way they do, and different from the way music sounds, or pain feels?

Background

Solving the first problem means that we would know to what extent a physical system can generate consciousness — the quantity or level of consciousness. Solving the second problem means that we would know what kind of consciousness it generates — the quality or content of consciousness. We all know that our own consciousness waxes when we awaken and wanes when we fall asleep. We may also know first-hand that we can "lose consciousness" after receiving a blow on the head, or after taking certain drugs, such as general anesthetics.

Thus, everyday experience indicates that consciousness has a physical substrate, and that that physical substrate must be working in the proper way for us to be fully conscious. It also prompts us to ask, more generally, what may be the conditions that determine to what extent consciousness is present.

For example, are newborn babies conscious, and to what extent? Are animals conscious? If so, are some animals more conscious than others? And can they feel pain? Can a conscious artifact be constructed with non-neural ingredients? Is a person with akinetic mutism — awake with eyes open, but mute, immobile, and nearly unresponsive — conscious or not? And how much consciousness is there during sleepwalking or psychomotor seizures? It would seem that, to address these questions and obtain a genuine understanding of consciousness, empirical studies must be complemented by a theoretical analysis.

The theory presented here claims that consciousness has to do with the capacity to integrate information. This claim may not seem self-evident, perhaps because, being endowed with consciousness for most of our existence, we take it for granted. To gain some perspective, it is useful to resort to some thought experiments that illustrate key properties of subjective experience: its informativeness, its unity, and its spatio-temporal scale.

Consider the following thought experiment. You are facing a blank screen that is alternately on and off, and you have been instructed to say "light" when the screen turns on and "dark" when it turns off. A photodiode — a very simple light-sensitive device — has also been placed in front of the screen, and is set up to beep when the screen emits light and to stay silent when the screen does not.

The first problem of consciousness boils down to this. When you differentiate between the screen being on or off, you have the conscious experience of "seeing" light or dark. The photodiode can also differentiate between the screen being on or off, but presumably it does not consciously "see" light and dark. What is the key difference between you and the photodiode that makes you "see" light consciously? According to the theory, the key difference between you and the photodiode has to do with how much information is generated when that differentiation is made.

Information is classically defined as the reduction of uncertainty among a number of alternative outcomes when one of them occurs [ 4 ]. When the blank screen turns on, the photodiode enters one of its two possible alternative states and beeps. As with the toss of a fair coin, this corresponds to 1 bit of information. However, when you see the blank screen turn on, the state you enter, unlike the photodiode's, is one out of an extraordinarily large number of possible states.

That is, the photodiode's repertoire is minimally differentiated, while yours is immensely so. It is not difficult to see this. For example, imagine that, instead of turning homogeneously on, the screen were to display at random every frame from every movie that was or could ever be produced. Without any effort, each of these frames would cause you to enter a different state and "see" a different image.

This means that when you enter the particular state "seeing light" you rule out not just "dark", but an extraordinarily large number of alternative possibilities. Whether or not you think of the bewildering number of alternatives (and you typically don't), this corresponds to an extraordinary amount of information (see Appendix, ii). This point is so simple that its importance has been overlooked.

While the ability to differentiate among a very large number of states is a major difference between you and the lowly photodiode, by itself it is not enough to account for the presence of conscious experience. To see why, consider an idealized one-megapixel digital camera, whose sensor chip is essentially a collection of one million photodiodes. Even if each photodiode in the sensor chip were just binary, the camera as such could differentiate among 2^1,000,000 states, an immense number, corresponding to 1,000,000 bits of information. Indeed, the camera would easily enter a different state for every frame from every movie that was or could ever be produced.
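The arithmetic behind these repertoire sizes follows directly from the classical definition of information: a repertoire of N equally likely states corresponds to log2(N) bits. A quick illustrative check in Python:

```python
import math

def repertoire_bits(num_binary_elements: int) -> float:
    """Information capacity, in bits, of a collection of independent
    binary elements: log2 of the number of distinguishable states."""
    num_states = 2 ** num_binary_elements
    return math.log2(num_states)

print(repertoire_bits(1))          # a single photodiode: 1.0 bit
print(repertoire_bits(1_000_000))  # one-megapixel sensor: 1000000.0 bits
```

Of course, as the text goes on to argue, this count says nothing about whether the information is integrated: it is the same whether the million photodiodes form one system or a million independent ones.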

Yet nobody would believe that the camera is conscious. What is the key difference between you and the camera?


According to the theory, the key difference between you and the camera has to do with information integration. From the perspective of an external observer, the camera chip can certainly enter a very large number of different states, as could easily be demonstrated by presenting it with all possible input signals. However, the sensor chip can be considered just as well as a collection of one million photodiodes with a repertoire of two states each, rather than as a single integrated system with a repertoire of 2^1,000,000 states.

This is because, due to the absence of interactions among the photodiodes within the sensor chip, the state of each element is causally independent of that of the other elements, and no information can be integrated among them. Indeed, if the sensor chip were literally cut down into its individual photodiodes, the performance of the camera would not change at all. By contrast, the repertoire of states available to you cannot be subdivided into the repertoire of states available to independent components.

This is because, due to the multitude of causal interactions among the elements of your brain, the state of each element is causally dependent on that of other elements, which is why information can be integrated among them. Indeed, unlike disconnecting the photodiodes in a camera sensor, disconnecting the elements of your brain that underlie consciousness has disastrous effects. The integration of information in conscious experience is evident phenomenologically: when you consciously "see" a certain image, that image is experienced as an integrated whole and cannot be subdivided into component images that are experienced independently.

For example, no matter how hard you try, you cannot experience colors independent of shapes, or the left half of the visual field independently of the right half. And indeed, the only way to do so is to physically split the brain in two to prevent information integration between the two hemispheres.

But then, such split-brain operations yield two separate subjects of conscious experience, each of them having a smaller repertoire of available states and more limited performance [ 5 ]. Finally, it is important to appreciate that conscious experience unfolds at a characteristic spatio-temporal scale.

For instance, it flows in time at a characteristic speed and cannot be much faster or much slower. No matter how hard you try, you cannot speed up experience to follow a movie accelerated a hundred times, nor can you slow it down if the movie is decelerated.

Studies of how a percept is progressively specified and stabilized — a process called microgenesis — indicate that it takes up to 100–200 milliseconds to develop a fully formed sensory experience, and that the surfacing of a conscious thought may take even longer [ 6 ]. In fact, the emergence of a visual percept is somewhat similar to the development of a photographic print: first there is just the awareness that something has changed, then that it is something visual rather than, say, auditory; later some elementary features become apparent, such as motion, localization, and rough size; then colors and shapes emerge, followed by the formation of a full object and its recognition — a sequence that clearly goes from less to more differentiated [ 6 ].

Other evidence indicates that a single conscious moment does not extend beyond 2–3 seconds [ 7 ]. While it is arguable whether conscious experience unfolds more like a series of discrete snapshots or a continuous flow, its time scale certainly lies between these lower and upper limits. Thus, a phenomenological analysis indicates that consciousness has to do with the ability to integrate a large amount of information, and that such integration occurs at a characteristic spatio-temporal scale.

If consciousness corresponds to the capacity to integrate information, then a physical system should be able to generate consciousness to the extent that it has a large repertoire of available states (information), yet cannot be decomposed into a collection of causally independent subsystems (integration). How can one identify such an integrated system, and how can one measure its repertoire of available states [ 2 , 8 ]? As was mentioned above, to measure the repertoire of states that are available to a system, one can use the entropy function, but this way of measuring information is completely insensitive to whether the information is integrated.

Thus, measuring entropy would not allow us to distinguish between one million photodiodes with a repertoire of two states each, and a single integrated system with a repertoire of 2^1,000,000 states. To measure information integration, it is essential to know whether a set of elements constitutes a causally integrated system, or whether it can be broken down into a number of independent or quasi-independent subsets among which no information can be integrated.

To see how one can achieve this goal, consider an extremely simplified system constituted of a set of elements. To make matters slightly more concrete, assume that we are dealing with a neural system. Each element could represent, for instance, a group of locally interconnected neurons that share inputs and outputs, such as a cortical minicolumn.

Assume further that each element can go through discrete activity states, corresponding to different firing levels, each of which lasts for a few hundred milliseconds. Finally, for the present purposes, let us imagine that the system is disconnected from external inputs, just as the brain is virtually disconnected from the environment when it is dreaming. Consider now a subset S of elements taken from such a system, and the diagram of causal interactions among them (Fig. 1). We want to measure the information generated when S enters a particular state out of its repertoire, but only to the extent that such information can be integrated, i.e., only to the extent that it is generated by causal interactions among the elements of S.

How can one do so? One way is to divide S into two complementary parts A and B, and evaluate the responses of B that can be caused by all possible inputs originating from A. In neural terms, we try out all possible combinations of firing patterns as outputs from A, and establish how differentiated the repertoire of firing patterns they produce in B is. In information-theoretical terms, we give maximum entropy to the outputs from A (A^Hmax), i.e., we substitute its elements with independent noise sources, and measure the entropy of the responses of B that they induce. This defines the effective information from A to B, EI(A→B) = MI(A^Hmax : B), where MI is the mutual information. Note that since A is substituted by independent noise sources, there are no causal effects of B on A; therefore the entropy shared by B and A is necessarily due to causal effects of A on B.
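A minimal discrete sketch of this perturbational recipe, assuming a toy system in which B's response to A is given by a known mechanism (`copy` and `constant` below are hypothetical mechanisms, not part of the paper's model systems): drive A with all of its states made equally likely, and measure the entropy of the repertoire of responses this induces in B.

```python
import math
from collections import Counter

def entropy(counts):
    """Entropy in bits of an empirical distribution of responses."""
    total = sum(counts.values())
    return 0.0 - sum((n / total) * math.log2(n / total)
                     for n in counts.values())

def ei_a_to_b(mechanism, a_states):
    """Effective information A -> B: entropy of B's responses when A is
    replaced by a maximum-entropy source over its possible outputs."""
    responses = Counter(mechanism(a) for a in a_states)
    return entropy(responses)

copy = lambda a: a       # B mirrors A: every state of A is distinguishable in B
constant = lambda a: 0   # B ignores A: no causal effect of A on B

a_states = range(4)      # A has 4 possible output states (2 bits)
print(ei_a_to_b(copy, a_states))      # -> 2.0
print(ei_a_to_b(constant, a_states))  # -> 0.0
```

The second call shows why EI captures causation rather than mere repertoire size: if perturbing A cannot change B, the effective information from A to B is zero.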

Figure 1. Effective information, minimum information bipartition, and complexes. Effective information: arrows indicate causally effective connections linking A to B and B to A across the bipartition (other connections may link both A and B to the rest of the system X). The entropy of the states of B that is due to the input from A is then measured. Note that A can affect B directly through connections linking the two subsets, as well as indirectly via X.

Minimum information bipartition; analysis of complexes. Methodological note: if the system's dynamics corresponds to a multivariate Gaussian random process, its covariance matrix COV(X) can be derived analytically. As in previous work, we consider the vector X of random variables that represents the activity of the elements of X, subject to independent Gaussian noise R of magnitude c.

Under Gaussian assumptions, all deviations from independence between the two complementary parts A and B of a subset S of X are expressed by the covariances among the respective elements. Note that MI(A:B) is symmetric and positive. To obtain the effective information between A and B within model systems, independent noise sources in A are enforced by setting to zero the strength of the connections within A and afferent to A.

Then the covariance matrix for A is equal to the identity matrix (given independent Gaussian noise), and any statistical dependence between A and B must be due to the causal effects of A on B, mediated by the efferent connections of A. Moreover, all possible outputs from A that could affect B are evaluated. The independent Gaussian noise R applied to A is multiplied by c_p, the perturbation coefficient, while the independent Gaussian noise applied to the rest of the system is given by c_i, the intrinsic noise coefficient.
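For the simplest Gaussian case (one element in each part, unit variances, correlation r), the standard determinant formula MI(A:B) = 0.5 · log2(det COV(A) · det COV(B) / det COV(S)) reduces to a one-liner. This is an illustrative toy, not one of the paper's model systems:

```python
import math

def gaussian_mi_bits(r: float) -> float:
    """MI(A:B) in bits for two unit-variance Gaussian variables with
    correlation r: det(COV_A) = det(COV_B) = 1, det(COV_S) = 1 - r**2."""
    return 0.5 * math.log2(1.0 / (1.0 - r * r))

print(gaussian_mi_bits(0.0))  # independent parts share no information
```

Consistent with the text, the measure is symmetric (r and -r give the same value) and positive whenever the parts are statistically dependent.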

Based on the notion of effective information for a bipartition, we can assess how much information can be integrated within a system of elements. To this end, we note that a subset S of elements cannot integrate any information as a subset if there is a way to partition S into two parts A and B such that EI(A⇄B) = 0. In such a case, in fact, we would clearly be dealing with at least two causally independent subsets, rather than with a single, integrated subset.

This is exactly what would happen with the photodiodes making up the sensor of a digital camera: perturbing the state of some of the photodiodes would make no difference to the state of the others. Similarly, a subset can integrate little information if there is a way to partition it into two parts A and B such that EI(A⇄B) is low: the effective information across that bipartition is the limiting factor on the subset's information integration capacity. Therefore, in order to measure the information integration capacity of a subset S, we should search for the bipartition(s) of S for which EI(A⇄B) reaches a minimum (the informational "weakest link").
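The search for the informational "weakest link" can be sketched as a brute-force enumeration of bipartitions. Here `toy_ei` is a hypothetical stand-in that scores a bipartition of a 4-element system by the causal links it cuts (only elements 0-1 and 2-3 are linked); it is not the perturbational EI measure itself:

```python
from itertools import combinations

def bipartitions(elements):
    """Yield every split of `elements` into two non-empty parts A, B."""
    n = len(elements)
    for k in range(1, n // 2 + 1):
        for a in combinations(elements, k):
            b = tuple(e for e in elements if e not in a)
            if k < n - k or a < b:   # avoid counting each split twice
                yield a, b

def toy_ei(a, b):
    """Illustrative EI score: one unit per causal link cut by the split.
    Links in this toy system: 0-1 and 2-3 only."""
    links = {(0, 1), (2, 3)}
    return sum(1 for x, y in links if (x in a) != (y in a))

s = (0, 1, 2, 3)
mib = min(bipartitions(s), key=lambda ab: toy_ei(*ab))
print(toy_ei(*mib))  # -> 0: S decomposes into independent subsets
```

The minimum comes out to zero for the split {0, 1} vs {2, 3}: like the camera chip, this toy system decomposes into causally independent subsets and integrates no information as a whole.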

We are now in a position to establish which subsets are actually capable of integrating information, and how much of it (Fig. 1). What we are left with are complexes: individual entities that can integrate information. Some properties of complexes worth pointing out are, for instance, that a complex can be causally connected to elements that are not part of it (the input and output elements of a complex are called ports-in and ports-out, respectively). Also, the same element can belong to more than one complex, and complexes can overlap. To the extent that consciousness corresponds to the capacity to integrate information, complexes are the "subjects" of experience, being the locus where information can be integrated.

Since information can only be integrated within a complex and not outside its boundaries, consciousness as information integration is necessarily subjective, private, and related to a single point of view or perspective [ 1 , 9 ]. It follows that elements that are part of a complex contribute to its conscious experience, while elements that are not part of it do not, even though they may be connected to it and exchange information with it through ports-in and ports-out. In the brain, for example, synchronous firing of heavily interconnected groups of neurons sharing inputs and outputs, such as cortical minicolumns, may produce significant effects in the rest of the brain, while asynchronous firing of various combinations of individual neurons may be less effective.

Indeed, a neural system will soon settle down into states that become progressively more independent of the stimulation. To recapitulate, the theory claims that consciousness corresponds to the capacity to integrate information. Even if we were reasonably sure that a system is conscious, it is not immediately obvious what kind of consciousness it would have. As was mentioned early on, our own consciousness comes in specific and seemingly irreducible qualities, exemplified by the different modalities (e.g., vision and audition).

What determines that colors look the way they do, and different from the way music sounds, or pain feels? And why can we not even imagine what a "sixth" sense would feel like? Or consider the conscious experience of others. Does a gifted musician experience the sound of an orchestra the same way you do, or is his experience richer? And what about bats [ 10 ]? Assuming that they are conscious, how do they experience the world they sense through echolocation? Is their experience of the world vision-like, audition-like, or completely alien to us?

Unless we accept that the kind of consciousness a system has is arbitrary, there must be some necessary and sufficient conditions that determine exactly what kind of experiences it can have. This is the second problem of consciousness. While it may not be obvious how best to address this problem, we do know that, just as the quantity of our consciousness depends on the proper functioning of a physical substrate — the brain, so does the quality of consciousness. Consider for example the acquisition of new discriminatory abilities, such as becoming expert at wine tasting.

Careful studies have shown that we do not learn to distinguish among a large number of different wines merely by attaching the appropriate labels to different sensations that we had had all along. Rather, it seems that we actually enlarge and refine the set of sensations triggered by tasting wines. Similar observations have been made by people who, for professional reasons, learn to discriminate among perfumes, colors, sounds, tactile sensations, and so on.

Or consider perceptual learning during development. While infants experience more than just a "buzzing confusion", there is no doubt that perceptual abilities undergo considerable refinement — just consider what your favorite red wine must have tasted like when all you had experienced was milk and water.

These examples indicate that the quality and repertoire of our conscious experience can change as a result of learning. What matters here is that such perceptual learning depends upon specific changes in the physical substrate of our consciousness — notably a refinement and rearranging of connection patterns among neurons in appropriate parts of the thalamocortical system.

Further evidence for a strict association between the quality of conscious experience and brain organization comes from countless neurological studies. Thus, we know that damage to certain parts of the cerebral cortex forever eliminates our ability to perceive visual motion, while leaving the rest of our consciousness seemingly intact.

By contrast, damage to other parts selectively eliminates our ability to perceive colors. There is obviously something about the organization of those cortical areas that makes them contribute different qualities — visual motion and color — to conscious experience. In this regard, it is especially important that the same cortical lesion that eliminates the ability to perceive color or motion also eliminates the ability to remember, imagine, and dream in color or motion. By contrast, lesions of the retina, while making us blind, do not prevent us from remembering, imagining, and dreaming in color unless they are congenital.

Thus, it is something having to do with the organization of certain cortical areas — and not with their inputs from the sensory periphery — that determines the quality of conscious experiences we can have. What is this something? According to the theory, just as the quantity of consciousness associated with a complex is determined by the amount of information that can be integrated among its elements, the quality of its consciousness is determined by the informational relationships that causally link its elements [ 13 ].

That is, the way information can be integrated within a complex determines not only how much consciousness it has, but also what kind of consciousness. More precisely, the theory claims that the elements of a complex constitute the dimensions of an abstract relational space, the qualia space. The values of effective information among the elements of a complex, by defining the relationships among these dimensions, specify the structure of this space (in a simplified Cartesian analogue, each element is a Cartesian axis, and the effective information values between elements define the angles between the axes; see Appendix, v).

This relational space is sufficient to specify the quality of conscious experience. Thus, the reason why certain cortical areas contribute to conscious experience of color and other parts to that of visual motion has to do with differences in the informational relationships both within each area and between each area and the rest of the main complex.

By contrast, the informational relationships that exist outside the main complex — including those involving sensory afferents — do not contribute either to the quantity or to the quality of consciousness. To exemplify, consider two very simple linear systems of four elements each (Fig. 2). The system on the left is organized as a divergent digraph: element number 1 sends connections of equal strength to the other three elements.

The system on the right is organized as a chain: element 1 is connected to 2, which is connected to 3, which is connected to 4. For each complex one can compute the effective information matrix, which contains the values of EI between each subset of elements and every other subset, corresponding to all informational relationships among the elements (the first row shows the values in one direction, the second row in the reciprocal direction). The elements themselves define the dimensions of the qualia space of each complex, in this case four.

The effective information matrix defines the relational structure of the space. This can be thought of as a kind of topology, in that the entries in the matrix can be considered to represent how close such dimensions are to each other (see Appendix, vi). Figure 2. Causal interactions diagram and analysis of complexes. Shown are two systems, one with a "divergent" architecture (left) and one with a "chain" architecture (right). Effective information matrix: shown is the effective information matrix for the two complexes above. For each complex, all bipartitions are indicated by listing one part (subset A) on the upper row and the complementary part (subset B) on the lower row.

In between are the values of effective information from A to B and from B to A for each bipartition, color-coded as black (zero), red (intermediate values), and yellow (high values).

The effective information matrix defines the set of informational relationships, or "qualia space", for each complex. Note that the effective information matrix refers exclusively to the informational relationships within the main complex (relationships with elements outside the main complex, represented here by empty circles, do not contribute to qualia space).

State diagram. Shown are five representative states for the two complexes. Each is represented by the activity state of the four elements of each complex arranged in a column (blue: active elements; black: inactive ones). The five states can be thought of, for instance, as evolving in time due to the intrinsic dynamics of the system or to inputs from the environment. Although the states are identical for the two complexes, their meaning is different because of the difference in the effective information matrix. The last four columns represent four special states, those corresponding to the activation of one element at a time.

Such states, if achievable, would correspond most closely to the specific "quale" contributed by that particular element in that particular complex. Nevertheless, it is a central claim of the theory that the structure of phenomenological relationships should reflect directly that of informational relationships.

For example, the conscious experiences of blue and red appear irreducible red is not simply less of blue. They may therefore correspond to different dimensions of qualia space different elements of the complex. We also know that, as different as blue and red may be subjectively, they are much closer to each other than they are, say, to the blaring of a trumpet.

EI values between the neuronal groups underlying the respective dimensions should behave accordingly, being higher between visual elements than between visual and auditory elements. As to the specific quality of different modalities and submodalities, the theory predicts that they are due to differences in the set of informational relationships within the respective cortical areas and between each area and the rest of the main complex. For example, areas that are organized topographically and areas that are organized according to a "winner takes all" arrangement should contribute different kinds of experiences.

Another prediction is that changes in the quality and repertoire of sensations as a result of perceptual learning would also correspond to a refinement of the informational relationships within and between the appropriate cortical areas belonging to the main complex. By contrast, the theory predicts that informational relationships outside a complex — including those among sensory afferents — should not contribute directly to the quality of conscious experience of that complex.

Of course, sensory afferents, sensory organs, and ultimately the nature and statistics of external stimuli play an essential role in shaping the informational relationships among the elements of the main complex — but such a role is an indirect and historical one, played out through evolution, development, and learning [ 14 ] (see Appendix, vii). According to the theory, once the quantity and quality of conscious experience that a complex can have are specified, the particular conscious state or experience that the complex will have at any given time is specified by the activity state of its elements at that time (in a Cartesian analogue, if each element of the complex corresponds to an axis of qualia space, and effective information values between elements define the angles between the axes, specifying the structure of the space, then the activity state of each element provides a coordinate along its axis, and each conscious state is defined by the set of all its coordinates).

The relevant activity variables are those that mediate the informational relationships among the elements, that is, those that mediate effective information. For example, if the elements are local groups of neurons, then the relevant variables are their firing patterns over tens to hundreds of milliseconds. The state of a complex at different times can be represented schematically by a state diagram (as in Fig. 2). Each column in the state diagram shows the activity values of all elements of a complex (here between 0 and 1). Different conscious states correspond to different patterns of activity distributed over all the elements of a complex, with no contribution from elements outside the complex.

Each conscious state can thus be thought of as a different point in the multidimensional qualia space defined by the effective information matrix of a complex (see Appendix, viii). Therefore, a succession or flow of conscious states over time can be thought of as a trajectory of points in qualia space. The state diagram also illustrates some states that have particular significance (second to fifth columns).

These are the states with just one active element, and all other elements silent or active at some baseline level. To the extent that this is possible, such highly selective states would represent the closest approximation to experiencing that element's specific contribution to consciousness — its quality or "quale". However, because of the differences in the qualia space between the two complexes, the same state over the four elements would correspond to different experiences and mean different things for the two complexes.

It should also be emphasized that, in every case, it is the activity state of all elements of the complex that defines a given conscious state, and both active and inactive elements count. To recapitulate, the theory claims that the quality of consciousness associated with a complex is determined by its effective information matrix. The effective information matrix specifies all informational relationships among the elements of a complex. The values of the variables mediating informational interactions among the elements of a complex specify the particular conscious experience at any given time.
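As a rough sketch of how one entry of an effective information matrix might be computed in a toy setting: in the theory, effective information from A to B is, roughly, the mutual information induced in B when A is substituted with maximum-entropy (here uniform) activity. The two-state channel below, the function names, and the reduction of the interaction to a single conditional-probability table are all simplifying assumptions of this sketch.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (in bits) from a joint probability table p(a, b)."""
    pa = joint.sum(axis=1, keepdims=True)   # marginal p(a)
    pb = joint.sum(axis=0, keepdims=True)   # marginal p(b)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])))

def effective_information(channel):
    """EI(A -> B) in this toy: inject a maximum-entropy (uniform)
    distribution into A and measure the mutual information it induces
    across the connection. channel[a, b] = p(b | a)."""
    n = channel.shape[0]
    joint = channel / n  # uniform p(a) = 1/n, times p(b | a)
    return mutual_information(joint)

noiseless = np.eye(2)                        # B copies A perfectly
noisy = np.array([[0.9, 0.1],
                  [0.1, 0.9]])               # B copies A with 10% error
print(effective_information(noiseless), effective_information(noisy))
```

The noiseless channel carries the full 1 bit a binary element can convey; noise reduces the value, illustrating why effective information depends on how much of a perturbation actually makes a difference downstream.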

Based on a phenomenological analysis, we have argued that consciousness corresponds to the capacity to integrate information. We have then considered how such capacity can be measured, and we have developed a theoretical framework for consciousness as information integration. We will now consider several neuroanatomical or neurophysiological factors that are known to influence consciousness. After briefly discussing the empirical evidence, we will use simplified computer models to illustrate how these neuroanatomical and neurophysiological factors influence information integration.

As we shall see, the information integration theory not only fits empirical observations reasonably well, but offers a principled explanation for them. Ancient Greek philosophers disputed whether the seat of consciousness was in the lungs, in the heart, or in the brain. The brain's pre-eminence is now undisputed, and scientists are trying to establish which specific parts of the brain are important.

For example, it is well established that the spinal cord is not essential for our conscious experience, as paraplegic individuals with high spinal transections are fully conscious. Conversely, a well-functioning thalamocortical system is essential for consciousness [ 15 ]. Opinions differ, however, about the contribution of certain cortical areas [ 1 , 16 — 21 ]. Studies of comatose or vegetative patients indicate that a global loss of consciousness is usually caused by lesions that impair multiple sectors of the thalamocortical system, or at least their ability to work together as a system.

By contrast, selective lesions of individual thalamocortical areas impair different submodalities of conscious experience, such as the perception of color or of faces [ 25 ]. Electrophysiological and imaging studies also indicate that neural activity that correlates with conscious experience is widely distributed over the cortex. It would seem, therefore, that the neural substrate of consciousness is a distributed thalamocortical network, and that there is no single cortical area where it all comes together (see Appendix, ix).

The fact that consciousness as we know it is generated by the thalamocortical system fits well with the information integration theory, since what we know about its organization appears ideally suited to the integration of information. On the information side, the thalamocortical system comprises a large number of elements that are functionally specialized, becoming activated in different circumstances. Thus, the cerebral cortex is subdivided into systems dealing with different functions, such as vision, audition, motor control, planning, and many others.

Each system in turn is subdivided into specialized areas; for example, different visual areas are activated by shape, color, and motion. Within an area, different groups of neurons are further specialized. On the integration side, the specialized elements of the thalamocortical system are linked by an extended network of intra- and inter-areal connections that permit rapid and effective interactions within and between areas [ 31 — 35 ].

In this way, thalamocortical neuronal groups are kept ready to respond, at multiple spatial and temporal scales, to activity changes in nearby and distant thalamocortical areas. As suggested by the regular finding of neurons showing multimodal responses that change depending on the context [ 36 , 37 ], the capacity of the thalamocortical system to integrate information is probably greatly enhanced by nonlinear switching mechanisms, such as gain modulation or synchronization, that can modify mappings between brain areas dynamically [ 34 , 38 — 40 ]. In summary, the thalamocortical system is organized in a way that appears to emphasize at once both functional specialization and functional integration.

As shown by computer simulations, systems of neural elements whose connectivity jointly satisfies the requirements for functional specialization and for functional integration are well suited to integrating information. First, connection patterns are different for different elements, ensuring functional specialization. Second, all elements can be reached from all other elements of the network, ensuring functional integration.
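The two requirements just listed can be checked mechanically on a toy directed network. The sketch below is illustrative only: the definitions of "specialized" (all connection patterns distinct) and "integrated" (all-pairs reachability) are crude stand-ins for the quantitative measures used in such simulations.

```python
import numpy as np

def fully_integrated(adj):
    """True if every element can reach every other element, computed by
    repeatedly squaring the boolean reachability matrix."""
    n = adj.shape[0]
    reach = (adj > 0) | np.eye(n, dtype=bool)
    for _ in range(n):
        reach = reach | ((reach.astype(int) @ reach.astype(int)) > 0)
    return bool(reach.all())

def fully_specialized(adj):
    """True if no two elements share the same pattern of incoming and
    outgoing connections."""
    patterns = {tuple(np.concatenate([row, col]))
                for row, col in zip(adj, adj.T)}
    return len(patterns) == adj.shape[0]

# A small directed ring of 8 elements plus one shortcut: each element
# has a distinct connection pattern, and every element reaches every
# other one, so both requirements are met.
n = 8
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    adj[i, (i + 1) % n] = 1
adj[0, 4] = 1

print(fully_integrated(adj), fully_specialized(adj))  # -> True True
```

A network failing either test (identical connection patterns everywhere, or a disconnected graph) would correspond to the losses of specialization and integration discussed next.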

In the thalamocortical system, reciprocal connections linking topographically organized areas may be especially effective with respect to information integration.

Figure legend: Information integration for a thalamocortical-like architecture. Optimization of information integration for a system that is both functionally specialized and functionally integrated. Note the heterogeneous arrangement of the incoming and outgoing connections: each element is connected to a different subset of elements, with different weights. Further analysis indicates that this network jointly maximizes functional specialization and functional integration among its 8 elements, thereby resembling the anatomical organization of the thalamocortical system [ 8 ].

Figure legend: Reduction of information integration through loss of specialization. Reduction of information integration through loss of integration.

Consider now the cerebellum. This brain region contains more neurons than the cerebral cortex, has huge numbers of synapses, receives mapped inputs from the environment, and controls several outputs. However, in striking contrast to the thalamocortical system, lesions or ablations indicate that the direct contribution of the cerebellum to conscious experience is minimal.

Why is this the case? According to the theory, the reason lies with the organization of cerebellar connections, which is radically different from that of the thalamocortical system and is not well suited to information integration. Specifically, the organization of the connections is such that individual patches of cerebellar cortex tend to be activated independently of one another, with little interaction possible between distant patches [ 41 , 42 ].

Such an organization seems to be highly suited for both the learning and the rapid, effortless execution of informationally insulated subroutines. This concept is illustrated in Fig. According to the information integration theory, this is the reason why these systems, although computationally very sophisticated, contribute little to consciousness. It is also the reason why there is no conscious experience associated with hypothalamic and brainstem circuits that regulate important physiological variables, such as blood pressure.
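A cerebellum-like, modular connectivity can be contrasted with the integrated architecture above in the same toy framework. The block-diagonal matrix below is an illustrative caricature, not a model from the paper: elements interact freely within a module but cannot reach other modules at all, so no information is integrated across the system as a whole.

```python
import numpy as np

def reachable(adj):
    """Transitive closure (boolean reachability) of a directed graph."""
    n = adj.shape[0]
    reach = (adj > 0) | np.eye(n, dtype=bool)
    for _ in range(n):
        reach = reach | ((reach.astype(int) @ reach.astype(int)) > 0)
    return reach

# Three modules of eight elements each: dense connections inside a
# module (all-to-all minus self-connections), none between modules --
# a caricature of patchy, informationally insulated cerebellar circuits.
module = np.ones((8, 8), dtype=int) - np.eye(8, dtype=int)
cerebellar = np.kron(np.eye(3, dtype=int), module)

reach = reachable(cerebellar)
within = bool(reach[:8, :8].all())   # inside a module: full interaction
across = bool(reach[:8, 8:].any())   # across modules: none at all
print(within, across)  # -> True False
```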

Figure legend: Information integration and complexes for other neural-like architectures. Schematic of a cerebellum-like organization: shown are three modules of eight elements each, with many feedforward and lateral connections within each module but minimal connections among them. Schematic of the organization of a reticular activating system. Schematic of the organization of afferent pathways: shown are three short chains that stand for afferent pathways; elements in afferent pathways can affect the main complex without belonging to it. Schematic of the organization of efferent pathways: shown are three short chains that stand for efferent pathways; each chain receives a connection from a port-out of the thalamocortical-like main complex. Schematic of the organization of cortico-subcortico-cortical loops: shown are three short chains that stand for cortico-subcortico-cortical loops, which are connected to the main complex at both ports-in and ports-out; elements in loops connected to the main complex can affect it without belonging to it.

It has been known for a long time that lesions in the reticular formation of the brainstem can produce unconsciousness and coma.

Conversely, stimulating the reticular formation can arouse a comatose animal and activate the thalamocortical system, making it ready to respond to stimuli [ 43 ]. Groups of neurons within the reticular formation are characterized by diffuse projections to many areas of the brain. Many such groups release neuromodulators such as acetylcholine, histamine, noradrenaline, serotonin, dopamine, and glutamate (acting on metabotropic receptors), and can have extremely widespread effects on both neural excitability and plasticity [ 44 ].

However, it would seem that the reticular formation, while necessary for the normal functioning of the thalamocortical system and therefore for the occurrence of conscious experience, may not contribute much in terms of specific dimensions of consciousness — it may work mostly like an external on-switch or as a transient booster of thalamocortical firing.

Such a role can be explained readily in terms of information integration, as shown in Fig.

What we see usually depends on the activity patterns that occur in the retina and that are relayed to the brain. However, many observations suggest that retinal activity does not contribute directly to conscious experience. Retinal cells can surely tell light from dark and convey that information to the visual cortex, but their rapidly shifting firing patterns do not correspond well with what we perceive. For example, during blinks and eye movements retinal activity changes dramatically, but visual perception does not.

The retina has a blind spot at the exit of the optic nerve where there are no photoreceptors, and it has low spatial resolution and no color sensitivity at the periphery of the visual field, but we are not aware of any of this. More importantly, lesioning the retina does not prevent conscious visual experiences.

For example, a person who becomes retinally blind as an adult continues to have vivid visual images and dreams. Conversely, stimulating the retina during sleep by keeping the eyes open and presenting various visual inputs does not yield any visual experience and does not affect visual dreams. Why is it that retinal activity usually determines what we see through its action on thalamocortical circuits, but does not contribute directly to conscious experience? According to the theory, afferent pathways act as ports-in to the main complex: input pathways providing powerful inputs to a complex add nothing to the information it integrates if their effects are entirely accounted for by ports-in.

In neurological practice, as well as in everyday life, we tend to associate consciousness with the presence of a diverse behavioral repertoire. For example, if we ask a lot of different questions and for each of them we obtain an appropriate answer, we generally infer that a person is conscious. Such a criterion is not unreasonable in terms of information integration, given that a wide behavioral repertoire is usually indicative of a large repertoire of internal states that are available to an integrated system.

However, it appears that neural activity in motor pathways, which is necessary to bring about such diverse behavioral responses, does not in itself contribute to consciousness. For example, patients with the locked-in syndrome, who are completely paralyzed except for the ability to gaze upwards, are fully conscious. Similarly, while we are completely paralyzed during dreams, consciousness is not impaired by the absence of behavior.

Even lesions of central motor areas do not impair consciousness. Why is it that neurons in motor pathways, which can produce a large repertoire of different outputs and thereby relay a large amount of information about different conscious states, do not contribute directly to consciousness? By the same logic as for input pathways, output pathways attached at the ports-out of the main complex can be driven by the complex without belonging to it, and so add nothing to the information it integrates.

Another set of neural structures that may not contribute directly to conscious experience are subcortical structures such as the basal ganglia. The basal ganglia are large nuclei that contain many circuits arranged in parallel: some are implicated in motor and oculomotor control; others, such as the dorsolateral prefrontal circuit, in cognitive functions; and others, such as the lateral orbitofrontal and anterior cingulate circuits, in social behavior, motivation, and emotion [ 45 ].

Each basal ganglia circuit originates in layer V of the cortex and, through a last step in the thalamus, returns to the cortex, not far from where the circuit started [ 46 ]. Similarly arranged cortico-ponto-cerebello-thalamo-cortical loops also exist. Why is it that these complicated neural structures, which are tightly connected to the thalamocortical system at both ends, do not seem to provide much direct contribution to conscious experience?

According to the theory, rather than expanding the main complex, the elements of the main complex and of the connected cycles form a joint complex that can only integrate the limited amount of information exchanged within each cycle.
