What is Computer Science?
Computer science as a field started in the first half of the 20th century, with the first general-purpose electronic computers and the theoretical models of Turing, Church, and others. As ideas of computational organization and architecture were developed, and notions of (hardware and software) layers of abstraction evolved, the field grew in many directions, spawning many subfields (architecture, programming languages, algorithm design, complexity theory, and so on). Overall, though, the fundamental object of study in the field is the computational system, which can be thought of as a kind of machine inside a box: we feed inputs into the box, and receive outputs from the box (computed by the machine).
The evolution of the field can be understood as a progressively deeper understanding of, and growing ability to engineer, such computational systems, best viewed as growth along three interrelated dimensions: power, trustworthiness, and reach.
Power is the ability to perform larger and more difficult computations. Faster and larger computing machines increase power, as does the development of faster algorithms. Complexity bounds enable us to better understand the limits of computational power, and programming language structures enable effective construction of more powerful software systems.
Trustworthiness refers to the extent to which we can ensure that our systems are reliable, and trusted by users, at the tasks they are designed and deployed to perform. Cybersecurity measures work to ensure trustworthiness in the face of external threats; program verification and software engineering techniques protect it against (inevitable) human error. Consideration of human factors is also essential if computational systems are to be trusted by users who are not privy to a system’s internal workings or design process.
Reach is about how widely computational systems touch and influence human activities. The earliest electronic computers had limited reach, serving a narrow set of needs for government and large businesses, while today we carry powerful hand-held computers serving a great variety of personal and social needs, and the growth of the Internet of Things (IoT) promises to extend computing’s reach even further.
Growth in each of these dimensions requires, influences, and constrains growth in the others. Increasing power or reach opens up new gaps in trustworthiness that need to be addressed, while increasing trustworthiness and reach tends to require more computational power.
Current Trends
We can understand the implications of current trends in computing, and the relationships among these developments, through this tripartite lens:
- Recent successes in machine learning with big data have extended the reach of computing systems into new segments of society and the economy, thereby highlighting new aspects of trust that need to be addressed, such as achieving fairness and mitigating bias in analytical and decision-support systems.
- Edge computing increases computational power by redistributing computation toward the network’s edge, which concomitantly extends reach and thus requires deeper attention to questions of trust in cybersecurity.
- The complementary technology areas of the Internet of Things and intelligent spaces likewise extend reach deep into our lives and daily activities, raising questions of trust related both to cybersecurity and to how understandable and predictable their responses are to us.
- Similarly, digital manufacturing extends the reach of computing into the physical realm and poses new questions of trust, in cybersecurity and in complex logistics-management systems, that need to be solved, as does digital twinning.
- The development of more realistic virtual and augmented reality systems promises a radical increase in computing’s reach, with profound social impact. This has been enabled by increases in computing power, and reaching full realism (however defined) will require even more (cf. edge computing). It will also raise new questions of trust, particularly around privacy, as well as the effects of trust breaches when computing is tightly integrated with everyday activities.
- Quantum computing promises greater power, and specifically power that undermines public-key cryptography, a main pillar of current computational trustworthiness (Shor’s algorithm, for instance, efficiently factors the large integers whose hardness RSA depends on).
- On the other hand, blockchains and other distributed ledgers enable the creation of distributed systems that can be trusted even if the individual actors using them are not (zero-trust information security), and that are robust to losses of nodes on the network (thanks to distribution). This uses the power of the network to create systems that increase trust and thus enables increased computational reach (via new financial and other applications); a minimal sketch of the hash-chaining behind this tamper-evidence follows this list.
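To make the trust property concrete, here is a minimal Python sketch of the hash-chaining at the core of a distributed ledger (the record fields `from`, `to`, and `amt` are hypothetical illustrations, not any particular system’s schema). It shows only the tamper-evidence ingredient: each entry commits to the entire history before it, so no single actor can quietly rewrite a past record.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry


def entry_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous entry's hash,
    so each entry commits to everything that came before it."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()


def build_ledger(records: list) -> list:
    """Chain records into a tamper-evident ledger."""
    ledger, prev = [], GENESIS
    for rec in records:
        h = entry_hash(rec, prev)
        ledger.append({"record": rec, "prev": prev, "hash": h})
        prev = h
    return ledger


def verify(ledger: list) -> bool:
    """Recompute every hash; altering any past record breaks the chain."""
    prev = GENESIS
    for entry in ledger:
        if entry["prev"] != prev or entry_hash(entry["record"], prev) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True


ledger = build_ledger([{"from": "a", "to": "b", "amt": 5},
                       {"from": "b", "to": "c", "amt": 2}])
print(verify(ledger))              # True
ledger[0]["record"]["amt"] = 500   # attempt to rewrite history
print(verify(ledger))              # False: the tampered entry fails, and all later hashes depend on it
```

In a real deployment, of course, this chain is replicated and extended by a consensus protocol (e.g., proof of work or proof of stake) across mutually untrusting nodes; that separate, substantial layer is what removes the need to trust any single party.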
The Next Step
Where, then, is the edge of the field, where innovative research will move it forward?
At the macro level, we see a great deal of possibility in all three main themes:
- Computational power has reached the limits of Moore’s Law, and so is ripe for a deep conceptual shift, whether through quantum computing, parallelism, non-von Neumann architectures, or some idea yet to be discovered.
- Computational trust is strained (at best) by increases in the power and reach of computing: cybersecurity is an eternal arms race, catalyzed further by demands for efficiency, and the increasing use of automation (whether AI or otherwise) raises new and evolving questions of algorithmic trust.
- Computational reach has increased enormously through the ubiquity of computing devices, both personal and IoT, which spread computation to new aspects of our societies, economies, and lives.
In fact, if we consider the effects of many of the current trends identified above, we see one overall theme emerging: the growing intertwining of computational systems with human systems (individual, organizational, social). Personal devices (phones, watches, etc.), IoT, and intelligent spaces deeply connect our daily and minute-to-minute activities with computational adjuncts; as virtual/augmented reality becomes good enough for broad use, this integration will leap even further. In industry, digital twinning and digital manufacturing put computational models at the center of physical production. And, while the use of information technology has a long history in finance, blockchain’s enabling of non-governmental currencies, together with the way modern data analytics has amplified high-frequency trading and the development of complex financial instruments, have transformed the industry (and, for now, left regulation somewhat behind). Similarly, e-discovery is transforming staid law offices, computer vision is transforming food safety as well as law enforcement, and machine learning systems are being applied (rightly or wrongly) to a whole host of societal, political, and business problems.
Thus, the broad paradigm shift that is needed is a change in how we think about the object that we study in computer science. Rather than a metaphorical glass box containing just a computational mechanism, receiving inputs from and providing outputs to an external user, we need to expand the box we consider to include the user and their behavior, as part of a larger, more complex computational system, and, beyond the user, their social/organizational context as well.
This system has enormously more degrees of freedom, and is not fully controllable by the computer scientist, but it cannot be ignored, due to the tight interconnections between the core computational system (hardware, algorithms, data structures) and the information flows and incentives induced in its human users. This view of the proper object of study of our field we may term socially integrated computing, in that the human and social context (understood broadly, to include all relevant human-human connections and interactions) is taken as integral to the computational system to be analyzed and designed.
There is, of course, much work in this vein already: in HCI, in social network analysis, in agent-based modeling, in computational economics and mechanism design, and so on. I believe, though, that these disparate lines of research can best be understood together as aspects of a shift in how we view the field as a whole, toward socially integrated computing, and that such a unifying view ought to transform it, and its effect on the world, for good.