Exploring augmented collective intelligence
This post presents the starting point for my research in this field. It is just a first approach to the topic, open to comments and suggestions.
I. Introduction
Technology is changing the world. Mobile, cloud and big data technologies have profoundly changed the world over the last 20 years, creating new services such as social media, e-commerce and peer-to-peer platforms that have scaled globally at remarkable speed.
Emerging technologies such as artificial intelligence, blockchain and quantum computing will have a similar effect in the years to come. And although technology has always been an agent of change in human history, the pace at which technology develops and scales today has become exponential. This makes the impact of technology bigger and broader, and adapting to it a real challenge for both individuals and organizations.
In this context, our understanding of human intelligence is also evolving. Intelligence is one of the key attributes that distinguish us as human beings. It is also the basis for fundamental developments such as scientific discovery, technological development, business innovation and social progress. Therefore, a better understanding of intelligence in our new technological context is of the utmost importance, both for understanding ourselves as human beings and for the advancement of our societies in all human spheres (politics, business, law, etc.).
We can understand intelligence in different dimensions. A first dimension is the individual level or what we can just call human intelligence. A second dimension is the intelligence emerging from the connection and interactions between small or large groups of individuals, or what we can call collective intelligence. A third dimension is the intelligence that arises from the extension of human intelligence through the use of artificial intelligence systems, or what we can call augmented intelligence. Finally, augmented collective intelligence is the “three-dimensional space” resulting from the combination of the three previous dimensions.
Each one of these dimensions of intelligence has its own fields of research, e.g. brain research in the case of human intelligence or organizational design in the case of collective intelligence. At the same time, technological development is also expanding our understanding of human intelligence (e.g. through brain magnetic resonance imaging) while enabling the design and testing of new forms of collective intelligence (like Decentralized Autonomous Organizations) and augmented intelligence (like natural language processing). Furthermore, the combination of all these developments will open up as-yet-unknown spaces of exploration.
All these developments will have profound implications for both individuals and organizations. Anticipating these implications will allow us to be better prepared to manage them properly.
II. Research methodology
This research initiative will follow these methodological principles:
A multidisciplinary approach to connect the state-of-the-art of different disciplines with the goal of identifying emerging patterns.
Based on a historical perspective, so as to start from the foundations of science and technology in each field.
Leveraging a network of experts in the different fields to gather relevant input and insights in each area.
Oriented towards practical recommendations for organizations and individuals.
Progressing on an iterative model to deliver incremental results.
III. Research workstreams
a) Understanding human intelligence
The first dimension to analyze is the individual intelligence that characterizes us as human beings. It involves two key themes: the physical basis of intelligence (the brain and the nervous system, as studied by neuroscience) and its metaphysical basis (represented by consciousness, as studied by psychology and philosophy).
Beyond these two fundamental elements, we can also analyze the process of learning (as the basic process that allows us to grow our intellectual capacities) and the philosophy of knowledge (so as to understand the different paths to acquiring knowledge and their limits). This space is currently being disrupted by new technologies such as advanced neuroimaging techniques, new brain-computer interfaces and new learning systems or EdTech.
I will now focus on the two key themes of human intelligence: neuroscience and consciousness.
Neuroscience
Modern neuroscience is the result of the convergence of various scientific disciplines such as anatomy, embryology, physiology, biochemistry, pharmacology, neurology and psychology. Its origin goes back to Alcmaeon of Croton in the 5th century BC, who discovered the optic nerves and proposed that the brain was the center of thinking. But we could argue that modern neuroscience starts with the first elaboration of the neuron doctrine by Santiago Ramón y Cajal, based on his neuroanatomical research at the end of the 19th century and later corroborated by Charles Scott Sherrington from a neurophysiological perspective. The neuron doctrine stated that the individual neuron is the structural, functional and perceptual unit of the nervous system. This doctrine became widely accepted and was the conceptual basis for decades of progress in our understanding of the brain, including the localization of different functions (such as vision, speech or motor control) in specific regions of our neocortex.
Nonetheless, over the last few decades a new paradigm has emerged that puts the focus not on the individual neuron but on the connectivity between neurons, that is, on ensembles of neurons or neural circuits. The central idea of this new paradigm is that neural circuits -in which one neuron receives inputs from many other neurons while sending its outputs to a large number of other neurons- generate emergent properties that define the functional basis of the brain. These neural networks are modular and hierarchically organized. Moreover, synaptic plasticity allows them to evolve and recombine over time. This line of thinking suggests that network science and complexity science may have a relevant role to play in our understanding of how the brain works.
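To make this network view slightly more concrete, here is a minimal sketch in Python using the networkx library: a toy graph with three densely connected clusters standing in for local circuits, sparsely wired together, on which a standard community-detection algorithm recovers the modular structure. The module sizes and connection probabilities are illustrative assumptions, not empirical data about any real brain.

```python
# Toy illustration of the network view of neural circuits: build a small
# modular graph (dense clusters as local circuits, sparse links as long-range
# connections) and detect its community structure with network-science tools.
# All sizes and probabilities are illustrative assumptions.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Three densely connected "modules" of 20 nodes each, sparsely linked together
G = nx.random_partition_graph([20, 20, 20], 0.3, 0.02, seed=42)

communities = greedy_modularity_communities(G)
print(f"Nodes: {G.number_of_nodes()}, edges: {G.number_of_edges()}")
print(f"Detected modules: {len(communities)}")
print(f"Modularity: {modularity(G, communities):.2f}")
```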
My reference person in this interesting field is Professor Rafael Yuste, who launched the BRAIN Initiative sponsored by the Obama administration to fund neuroscience research across more than 500 institutions from 2013 to 2025. I have had the privilege of knowing Rafa over the past few years and he has always been an inspiration. María Asunción Pastor, Professor of Neurology at Universidad de Navarra and Director of its NeuroImaging Laboratory, is also giving me valuable advice and support in this field.
Consciousness
One of the perennial questions of neuroscience is what the philosopher David Chalmers has called the hard problem of consciousness. Chalmers argues that personal, first-person experience is not logically entailed by lower-order structures and functions; it is not the sum of its physical parts, including the brain and the nervous system. Although many neuroscientists deny the existence of such a hard problem and argue that consciousness will be explained in the course of further analysis of the brain and human behavior, I agree with Chalmers that consciousness belongs to a higher level of the hierarchy than the physical realm.
This statement may appear unscientific to some people, but I will argue it is not. The conflict arises from the reductionism of science -widespread in Western culture- that limits all possible knowledge to what is empirically verifiable with our physical senses. In reality, the adequate verification of data -as presented by Ken Wilber- is based on: (i) instrumental injunction, (ii) intuitive apprehension and (iii) communal confirmation. And this verification does not necessarily involve the sensory-empirical dimension. As an example, mathematics is widely accepted as a solid scientific field in which knowledge is built not on sensory-empirical data but on the elaboration and apprehension of mental concepts. In a similar fashion, I believe that consciousness will never be understood through sensory-empirical science, as it pertains to a higher-order structure that can only be analyzed from the mental and transcendental realms.
Starting from this idea -present in different cultures as part of the perennial wisdom of humanity- one can analyze the different levels of consciousness that emerge in the development of the human being through its mental and transcendental stages, as well as the different forms of knowledge that derive from them. A fundamental characteristic of these levels of consciousness is their hierarchical and holonic nature -as defined by the essayist Arthur Koestler-: each level is simultaneously a whole in and of itself as well as a part of a hierarchically superior whole. In other words, each higher level of consciousness integrates and transcends the previous one.
I will not go into more detail on this subject here, but rather refer to Ken Wilber, the father of integral psychology, who is my reference person in this field. In particular, his book Eye to Eye covers this subject in a very rigorous manner. On this topic, I also rely on the valuable guidance and support of Gonzalo Rodríguez-Fraile, founder of the Foundation for Consciousness Development and an expert in the field.
b) Understanding collective intelligence
The second dimension is the collective intelligence that emerges from the interaction and collaboration between a group of individuals. The assumption here is that the intelligence of the group is superior to that of any individual. And that the larger, more diverse and more interconnected the group, the higher its level of intelligence.
Accordingly, the key themes in this field are those that help characterize the dynamics of collective behavior: organizational design (as the basic structure that shapes the way a group of individuals operates in pursuit of a common purpose), values and culture (as the beliefs that individuals share so that they behave according to common principles and norms) and leadership (as the capacity people have to inspire others to follow them in the pursuit of their goals).
Although the way a group of individuals organizes its activities has traditionally been described using linear organizational charts, in reality the behavior of individuals in an organization is better described as a network of interactions and collaboration. As stated by the Finnish sociologist Esko Kilpi, who studied the relational view of the firm: “The future architecture of work is not the structure of a corporation, but the structure of the network (...) People from the whole network can contribute pieces of their time, creativity and expertise to ongoing events according to their interests, availability and experience, working in a transparent environment. (...) Work will be described as complex patterns of communicative interaction between interdependent individuals.” From this perspective, network science appears again as a relevant discipline for understanding the behavior and development of human organizations.
But technological development may also have a relevant role to play in the future development of human organizations, paving the way for new forms of collective intelligence to emerge. Blockchain technologies and crypto networks are best known for their capacity to create alternative monetary systems that can potentially disrupt our current financial system. But beyond that, their essential characteristic is their ability to decentralize human collaboration, starting with the basic functioning of a smart contract and scaling up to the broader concept of Decentralized Autonomous Organizations. Therefore, the crypto ecosystem can be seen as an open space for experimenting with new forms of governance and collective decision making.
I will now focus on what I consider to be the two basic themes in this field: human organizations and blockchain technologies.
Human organizations
I believe organizational design should be a key discipline in management. While the talk about digital transformation typically revolves around culture, leadership and talent management -all of them very relevant topics-, not as much attention has been paid to organizational structures. However, the org chart -with all its roles and reporting lines- is typically a very telling expression of the culture of a company, while leadership is often heavily conditioned (or even constrained) by the structures that underpin an organization. Even the most basic processes of internal communication and collaboration are greatly influenced by the design of the organization. All of this is just to make the point that the structural design of an organization is of paramount importance in the emergence of its collective intelligence.
This being the case, the reality is that most organizations today (in the private, public or even third sector) are still designed following the principles that inspired the industrial revolution in the 19th century: the specialization of work and the hierarchy of decision making. For this reason, over the last few years many organizations have launched various attempts to dismantle this structure of hierarchy and functional segregation. In the case of BBVA, for example, over the last 7 years a process of agile transformation -first incorporating the methodologies of agile development, and then its values and culture across the whole organization- has greatly transformed the organizational blueprint of the bank, resulting in a less rigid, more liquid and more transparent organization.
One of the key elements in all these processes of organizational transformation is putting learning at the core of the organization. As much as learning is a very relevant process in our understanding of human intelligence, organizational learning is also a critical capability in the development of collective intelligence. This was well analyzed by Peter M. Senge back in 1990 in his book The Fifth Discipline: The Art and Practice of the Learning Organization.
The main challenge for these transformation processes to succeed is always change management. People tend to associate their comfort zone with whatever little space they own in the overall org chart, and when the org chart is deeply restructured, uncertainty and anxiety arise. The basic human desires for control and recognition have to be tempered or transcended in order to adapt to a new form of organization that is less dependent on hierarchical structures. This connects very directly with the level of consciousness of the individuals who are part of the organization, as the evolution of any organization towards higher forms of collective intelligence goes hand in hand with the development of the consciousness of its leaders and members. This has been brilliantly described by Frederic Laloux in his book Reinventing Organizations, particularly in the first chapter, which presents an interesting analysis of the development of ever higher forms of human organization throughout the history of humanity, in parallel with the development of new stages of human consciousness, very much in line with Ken Wilber’s views on consciousness mentioned above.
All these ideas are just a starting point for thinking about collective intelligence, as the concept goes beyond the field of organizational design. My references in the field are Professor Thomas W. Malone and Executive Director Kathleen Kennedy at the MIT Center for Collective Intelligence. In the closely connected area of network science, I am following Albert-László Barabási -probably the biggest contributor to the field over the last decades- and the Santa Fe Institute, an interesting research institution devoted to complex systems science with a multidisciplinary approach.
Blockchain technologies
Traditional hierarchical organizations have many flaws (ineffective communication, organizational silos, predominance of politics, lack of transparency…), but one could argue that at least they have a simple and clear governance model: decisions are made top-down by a small and well defined group of people. By contrast, emerging organizational models that promote a flatter, more open and more collaborative approach (such as agile organizations, teal organizations, exponential organizations, holacracy, etc.) face the challenge of implementing decision-making processes that involve hundreds or thousands of people in a practical and efficient manner.
While this is a clear challenge, nature offers many examples of how it may work. One such example -as described by Deborah M. Gordon, professor of biology at Stanford University- is how ants are able to work collectively to perform many tasks (such as collecting, processing and distributing resources) without any central planning, as the result of continuous interactions among individuals. An illustrative case is how ants use pheromones as the signal that activates a positive feedback loop, rapidly amplifying the number of ants on a specific trail. Similarly, I believe that blockchains and crypto networks can create the necessary signaling -through network incentives, consensus protocols and immutable ledgers- for a large number of people to act collectively in a decentralized model. Bitcoin mining is a good example of this.
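As a rough illustration of this positive-feedback mechanism, the following sketch (in Python, with made-up parameters) simulates two competing trails: ants choose a trail with probability proportional to its pheromone level, each crossing deposits more pheromone, and a fraction evaporates at every step. A small initial advantage is quickly amplified into near-consensus, without any central coordination.

```python
# Minimal sketch (assumed parameters) of the pheromone positive-feedback loop:
# two trails compete, ants pick a trail with probability proportional to its
# pheromone level, each crossing deposits more pheromone, and some evaporates.
import random

pheromone = [1.0, 1.1]   # trail B starts with a tiny head start
DEPOSIT, EVAPORATION, ANTS_PER_STEP = 1.0, 0.05, 10

for step in range(30):
    for _ in range(ANTS_PER_STEP):
        total = sum(pheromone)
        choice = 0 if random.random() < pheromone[0] / total else 1
        pheromone[choice] += DEPOSIT                          # reinforcement
    pheromone = [p * (1 - EVAPORATION) for p in pheromone]    # evaporation

share_b = pheromone[1] / sum(pheromone)
print(f"After 30 steps, trail B carries {share_b:.0%} of the pheromone")
```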
Although blockchain is often referred to as a new technology, in reality it is not that new -bitcoin was created 12 years ago- and it is not strictly a novel technology, but rather an innovative combination of previously existing technologies: P2P networks, public key cryptography, hash functions and Merkle trees. This innovative combination was first introduced by Satoshi Nakamoto (a pseudonym) in the original bitcoin white paper as a solution to the double-spending problem that all previous attempts to develop a digital cash system had faced. But solving this problem did not only open the possibility of creating a digital cash system. More profoundly, it allowed the creation of a protocol for the open exchange of value -similarly to how the TCP/IP protocols enabled the creation of the internet for the open exchange of information.
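To ground two of those building blocks, here is a minimal Python sketch of a hash function and a Merkle tree: the Merkle root is a single fingerprint that commits to an entire list of transactions, so changing any transaction changes the root. This is an illustrative toy, not the exact construction used by Bitcoin (which, for instance, applies double SHA-256).

```python
# Minimal sketch of a hash function and a Merkle tree: the root commits to
# the full list of transactions, so any change to any transaction changes it.
import hashlib

def sha256(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])            # duplicate last node if odd
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [b"alice->bob:5", b"bob->carol:2", b"carol->dave:1"]
print(merkle_root(txs).hex())
print(merkle_root([b"alice->bob:6"] + txs[1:]).hex())  # any change alters the root
```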
The first step in this direction was the creation of a platform for developing smart contracts, which is how Ethereum was originally conceived when it launched in 2015. But it was soon understood that the aggregation of smart contracts could lead to the creation of a whole organization operating autonomously on a decentralized governance model: the so-called Decentralized Autonomous Organizations, or DAOs. This trend has opened up the possibility of experimenting with new organizational and governance models, taking ideas that until very recently were purely academic (such as liquid democracy or quadratic voting) to a real test on blockchain technologies. One of the most interesting platforms supporting this wave of governance experimentation is Aragon, which has already been used by many decentralized finance projects to create their own DAOs.
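As an example of the kind of governance mechanism being tested, here is a minimal sketch of quadratic voting in Python: casting n votes on a proposal costs n² voice credits, so strong preferences can be expressed, but at a rapidly increasing price. The members, credit budget and ballots are purely illustrative assumptions, not drawn from Aragon or any real DAO.

```python
# Minimal sketch of quadratic voting: casting n votes costs n^2 voice credits.
# Members and numbers are illustrative assumptions, not data from a real DAO.
from math import isqrt

CREDITS = 100  # voice credits given to each member

def max_votes(credits):
    """Largest n such that n^2 <= credits."""
    return isqrt(credits)

print(f"With {CREDITS} credits, a member can cast at most {max_votes(CREDITS)} votes")

# Each member states how many votes to cast (+ in favor, - against)
ballots = {"ana": 6, "ben": -3, "carla": 2, "dmitri": -4}

tally = 0
for member, votes in ballots.items():
    cost = votes ** 2
    assert cost <= CREDITS, f"{member} cannot afford {votes} votes"
    tally += votes
    print(f"{member}: {votes:+d} votes, cost {cost} credits")

print("Proposal", "passes" if tally > 0 else "fails", f"(net votes: {tally:+d})")
```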
We are still in the early days of crypto networks and DAOs, but I believe that these new technological platforms will help us develop new forms of decentralized organizations with more effective governance models. Nonetheless, any technology is only as good as the use we make of it, so the development of these new models will have to go hand in hand with reaching higher levels of collective consciousness.
My reference person in this field is Jorge Izquierdo, co-founder of Aragon and a good friend who has always given me great insights about this space. I also follow the research of crypto VC firms like Placeholder VC and Fabric Ventures -who have a deep view of this noisy sector- among many other crypto teams.
c) Understanding augmented intelligence
The third dimension to analyze is augmented intelligence, which can be understood as the extension of human intelligence through the use of artificial intelligence systems. Artificial intelligence (AI) is a discipline that has been around for many decades, but it is only during the last 10 years that AI has achieved outstanding results in solving previously intractable problems. Nonetheless, my view is that today -and for years to come- we are living in an era of augmentation, i.e. a world characterized more by AI algorithms embedded in human-led processes than by independent and autonomous AI systems.
The recent success of AI has been driven by several factors: (i) the availability of huge amounts of data thanks to the digitization of society, (ii) the increase in computational capacity coming from the development of GPUs, and (iii) innovation in the development of algorithms. Scaled data and scaled computation have allowed us to train very large neural networks (i.e. networks with billions of parameters, such as GPT-3 with its 175 billion parameters), while algorithmic innovation has made it possible for neural networks to run much faster. Looking ahead, quantum computing may have a similarly major impact, boosting machine learning to new levels of performance, particularly for simulation and optimization problems.
Hence, the two key themes in this field are artificial intelligence and quantum computing.
Artificial intelligence
The goal of creating machines with reasoning capacity dates back to the 17th century with Leibniz, who devised a language for the symbolic representation of concepts and ideas (characteristica universalis) and a logical calculation device based on that language (calculus ratiocinator). Following Leibniz’s vision, several inventors and mathematicians such as Charles Babbage (inspired by the Jacquard machine to develop a general-purpose computer), George Boole (who laid the foundations of Boolean algebra), Alan Turing (who set the basis for the architecture of computers), Claude Shannon (the father of information theory) and John von Neumann (who evolved the architecture of computers with his universal constructor) paved the way for the emergence of artificial intelligence as a new discipline at the Dartmouth Conference.
The meeting held at Dartmouth College in the summer of 1956 brought together scientists and engineers like Marvin Minsky, John McCarthy, Allen Newell and Herbert Simon. They coined the term artificial intelligence and started to explore it following different paths based on the symbolic representation of the world. Probably the highest expression of this symbolic school was the General Problem Solver created by Newell and Simon, which was received with great enthusiasm but ended up falling short of the expectations placed on it. New approaches were developed and tested -like the expert systems introduced by Edward Feigenbaum- until the 80s, when the focus shifted to the alternative paradigm of connectionism and neural networks. After years of research and development in the field of neural networks, deep learning finally achieved real success with the ImageNet project (a large visual database developed by Fei-Fei Li and her team at Princeton) and the 2012 ImageNet competition won by AlexNet, a deep neural network that achieved high performance by using GPUs. Since then, deep learning has proven very successful in many different applications such as computer vision, speech recognition, natural language processing, drug design, medical image analysis and material inspection, producing results comparable to and in some cases surpassing human expert performance. Note: for more information about the history of AI, I recommend Rafael Pardo Avellaneda’s paper “La trayectoria de la Inteligencia Artificial y el debate sobre los modelos de racionalidad”.
But despite all this progress, deep neural networks are still what we can call narrow AI (i.e. AI that outperforms humans in very narrowly defined tasks) rather than general AI (i.e. AI able to perform across different contexts and problems). In fact, I believe we are still far away from developing general AI systems and that they will probably not be achieved by simply scaling the computational size of current neural networks -this may well require the development of new AI modeling paradigms.
At the core of this AI revolution is data. Machine learning models are only as good as the labeled data used to train them. Despite all the attention given to convolutional neural networks (CNNs), generative adversarial networks (GANs), transformers, etc., the hard work of building good AI models is typically linked to data management. Any organization seeking to exploit the opportunity of AI beyond a few showcase use cases has to make a big effort in data architecture, data governance, data ingestion, data labeling, data security, data privacy, etc. The other key element for AI to make a real impact is what is called machine learning ops (MLOps), i.e. building an end-to-end machine learning development process to design, build and manage reproducible and evolvable software powered by machine learning models. MLOps thus unifies the release cycle for machine learning models and software applications, which is critical to scaling the use of machine learning models in production environments.
Following this path, an organization can start to incorporate machine learning models into its processes and operations, giving rise to augmentation. Some AI models may directly augment the capabilities of humans (such as next-best-offer tools supporting the activity of sales specialists) while others do so indirectly by optimizing internal processes (like spam filtering in email communication).
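To make the augmentation and MLOps points slightly more tangible, here is a minimal sketch using scikit-learn and joblib: a toy spam-filter pipeline that bundles preprocessing and model into a single artifact which can be versioned, released and reloaded as one unit, which is the kind of object an MLOps process manages. The training data, file name and version tag are assumptions for illustration only.

```python
# Minimal sketch: package preprocessing and model as one versioned artifact,
# the kind of unit an MLOps release process would manage. Toy data only.
import joblib
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["win a free prize now", "meeting moved to 3pm",
         "cheap loans click here", "quarterly report attached"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = legitimate

model = Pipeline([
    ("features", TfidfVectorizer()),      # preprocessing travels with the model
    ("classifier", LogisticRegression()),
])
model.fit(texts, labels)

joblib.dump(model, "spam_filter_v0.1.joblib")   # versioned artifact for release

restored = joblib.load("spam_filter_v0.1.joblib")
print(restored.predict(["free prize waiting for you"]))  # classify a new message
```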
My reference person in this field is Andrew Ng, who founded the Google Brain initiative, was chief scientist at Baidu and has been a professor of machine learning at Stanford for many years. I also rely on valuable insights from Paco González-Blanch, a good friend who is a product manager for Alexa AI at Amazon and an expert in the field.
Quantum computing
At the end of the day, machine learning models are just software. And software has evolved significantly over the past 20 years thanks to open source, agile, devops, cloud, microservices… But an even more fundamental change is coming in the next few years due to quantum computing.
Quantum computing is a new form of computation that leverages specific properties of subatomic particles described by quantum mechanics. Classical computers have achieved outstanding results in the past decades, but for some problems above a certain size and complexity we do not have enough computational power on Earth to tackle them. A new kind of computation is needed for these problems, and quantum computers attempt to provide it by leveraging the quantum phenomena of superposition and entanglement to create states that scale exponentially with the number of quantum bits, or qubits.
According to the principles of quantum mechanics, subatomic particles are set to a definite state only once they are measured. Before a measurement, particles are in an indeterminate state that can be represented as a combination of states -that we would ordinarily describe independently- called a superposition. At the same time, entanglement is a counter-intuitive quantum phenomenon by which a pair of particles share their state in such a way that measuring one of them automatically determines the correlated state of the second, even if the two are a great distance apart. These two properties -plus the phenomenon of quantum tunneling- are the basis of quantum computing as a new form of computation that encodes information in a new and different way, allowing us to speed up the resolution of certain types of problems such as optimization problems (e.g. financial market analysis, manufacturing applications or transportation optimization), simulation applications (e.g. drug development, fertilizer production or material durability), security applications (such as more robust encryption schemes or quantum communication) and machine learning (e.g. audio generation, image generation and predictive models).
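A small, hardware-free sketch in Python (using only NumPy) can illustrate these two phenomena: a Hadamard gate puts one qubit into superposition, a CNOT gate then entangles it with a second qubit, and the resulting Bell state yields the outcomes 00 or 11 with equal probability and never 01 or 10, showing the perfect correlation that entanglement produces.

```python
# Minimal sketch (pure NumPy, no quantum hardware) of superposition and
# entanglement: H puts qubit 0 in superposition, CNOT entangles it with
# qubit 1, producing the Bell state (|00> + |11>)/sqrt(2).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                    # control = first qubit

state = np.array([1, 0, 0, 0], dtype=float)       # start in |00>
state = CNOT @ (np.kron(H, I) @ state)            # H on qubit 0, then CNOT

probabilities = state ** 2                        # Born rule (real amplitudes)
for basis, p in zip(["00", "01", "10", "11"], probabilities):
    print(f"P({basis}) = {p:.2f}")                # -> 0.50, 0.00, 0.00, 0.50
```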
Quantum computers were first proposed by the physicist Paul Benioff in 1980. Richard Feynman and Yuri Manin later suggested that a quantum computer had the potential to simulate things that a classical computer could not. Once Benioff, Manin and Feynman opened the doors, physicists such as Juan Ignacio Cirac began to work on the possibility of experimental quantum computation, and mathematicians like Peter Shor started to investigate the nature of the algorithms that could be run on quantum computers (like Shor’s algorithm for integer factorization). Despite experimental progress during the last two decades, quantum computers always appeared to be a distant dream until public and private investment leapt forward in recent years. Big tech companies like IBM and Google and quantum startups like Rigetti are seriously working on producing commercial quantum computers within a timeframe of 3-5 years. And quantum annealing systems like the ones developed by D-Wave are already a reality. The good news is that these companies are already opening access to their hardware in the cloud, as IBM does through its IBM Quantum Network and Amazon does with Amazon Braket by partnering with Rigetti.
Although there are different types of quantum computers, most efforts are focused on either trapped-ion machines or superconducting technologies. Both types of systems still face many technical challenges: they have relatively short lifetimes, they are very error-prone and sensitive to the environment, and they face limitations due to complex hardware and software. The expectation is that these technical problems will be solved in the coming years, but as we do not yet know which type of technology or system will win this race, companies like Zapata Computing are focusing on building quantum software that is agnostic to the underlying hardware. As quantum computing gives rise to a new software paradigm, it will also be applied to machine learning, whether by developing quantum versions of artificial neural networks, developing fully quantum algorithms for pattern recognition or simply running classical machine learning algorithms on quantum computers to gain speed.
My reference in this field is the work done by IBM Research, led by Dario Gil whom I know from my years at BBVA. I also follow the work of other companies in the field and listen to insights from Carlos Kuchkovsky, a good friend and expert in the field.
d) Exploring augmented collective intelligence
As stated at the beginning, augmented collective intelligence is the “three-dimensional space” resulting from the combination of the three dimensions of human intelligence, collective intelligence and augmented intelligence. Going beyond this concept, we can try to elaborate a better definition based on the ideas previously outlined in this post:
Augmented collective intelligence is the intelligence emerging from an interconnected network of human beings -characterized by body, mind and consciousness- and software applications -based on machine learning models- that continuously interact and collaborate to achieve a shared purpose, following a decentralized model in which interactions are driven by agreed incentives and consensus is reached through given protocols, in continuous adaptation and evolution by learning from interactions with the environment, and whose development comes from reaching higher levels of individual and collective consciousness.
This is not meant to be a closed definition but rather a first proposal to iterate on. We can also anticipate some attributes of augmented collective intelligence, such as the following:
Connectivity. Intelligence is all about connectivity. The human brain is based on connectivity between neurons. Consciousness is in its essence the unity that connects all reality. Human organizations are the result of connectivity between people. Crypto networks excel at connecting the interests of millions of individuals. Machine learning algorithms learn through the connectivity of neural networks. And quantum physics has revealed a very special type of “connectivity” called entanglement. In summary, intelligence is best represented by networks, and networks grow by expanding their connectivity.
Hierarchy. The connectivity behind intelligence is not flat, but it rather has a hierarchical nature. This hierarchy is the opposite of the one discussed when talking about traditional organizations, since the hierarchy of intelligent networks is built bottom-up instead of top-down. This means that in intelligent networks hierarchy emerges as a result of collective action, rather than being an exercise of individual power from the top. Once again, the human brain is a good example of a hierarchical network, while teal organizations -as described by Frederic Laloux- can also be understood as such.
Holarchy. Going a bit further, we could say that the hierarchy of augmented collective intelligence is not a hierarchy of top and bottom but a hierarchy of wholes and parts, i.e. a holarchy. As mentioned before, a holon is something that is simultaneously a whole in and of itself as well as a part of a larger whole. Holons at one level are made up of the holons of the previous level, but all levels are ultimately descriptions of the same reality. As an example, living matter can be described as subatomic particles, or atoms, or molecules, or cells, or organs, or organisms, or communities… Similarly, augmented collective intelligence could build from the most basic level of individual network agents to higher levels of complexity through a holarchical process.
Data and signaling. Similarly to how ants use pheromone as a signal to amplify the use of a specific trail, augmented collective intelligence uses data as the signal to transmit information and promote the collaborative behavior of network agents. A good example of this is Numerai, a crypto project that gathers signals (data feeds) about the stock market from thousands of data scientists around the world so that the best ones are used to build a hedge fund. Data signaling is therefore a basic process for network incentives and network consensus to work. As another example, The Graph, also a crypto project, has specialized in giving access to Web3 data signals to decentralized projects through open APIs.
e) Implications for organizations and individuals
The purpose of this research project is not merely academic but oriented towards practical implications and recommendations for both organizations and individuals.
I believe that augmented collective intelligence will become a defining factor of organizations of all kinds -private, public, non-profit, cooperatives, etc- in the next decades. Understanding it will help us anticipate what organizational forms may arise in the future, what new challenges they will bring with them, how they will transform existing organizations, what impact they may have on individuals, etc. All these questions are very relevant for the future of work, economic development and social progress.
IV. Final remark
As stated at the beginning, this is just my first approach to the topic. I welcome comments about the ideas outlined in this post as well as suggestions regarding people and institutions to follow in the different fields. You can leave a comment here.
You can also sign up here if you don’t want to miss future posts about this research initiative.
Congratulations on the post, Ricardo, and thank you for sharing and addressing a topic that is so interesting and necessary for all of us...
As I read it, what resonated most with me is that those of us who are a hybrid between technology and the human factor must focus on allying machines and people and on creating the conditions needed for collective intelligence to take shape. And for that to happen (as far as I understand from the little I know about the subject), what is being demonstrated is that group intelligence combined with machines resolves conflicts in business, addresses any kind of threat better and, as a result, generates new professions, new methods of thinking and new ways of approaching work. And from there, those of us who work every day to raise our level of consciousness a little higher (as individuals) and to share it collectively have to work on finding new methods of thinking and on being facilitators so that all of this can happen.
There is a “home-made” formula that I have read in several places and that helps me understand collective intelligence a little better: CI = degree of diversity in multiple intelligences / degree of diversity between individual and collective goals.
And this raises a question for me: what methods or chains of thought can be generated in the individual so that, from their individuality, they foster and promote CI? Because if we identify this, we can empower, motivate and accompany work teams to awaken consciousness and thereby raise collective consciousness a little higher.
Do you agree?
Again, thank you very much!!!
A very interesting topic. For the section on consciousness, I suggest reviewing Buddhist philosophy, which has often been contrasted with science and where points of agreement have been found. Regarding collective intelligence, a concern is that the base which, through the democratization brought by technology, makes the decisions may do so on the basis of emotional biases and irrationality (so widely covered by Dan Ariely), bringing harmful collective results such as Brexit, the rise to power of populist proposals that are clearly impossible to fulfill, or lynchings and collective exaltations without foundation. The important thing there is first to educate that decision-making base.