Innovation – the “Special Forces” model and Pasteur’s Quadrant: impact for TEL in Higher Ed

Innovation is described by some as ‘connecting the dots’. The iconoclastic chief of the Virgin Group, Sir Richard Branson, uses the mantra “A-B-C-D. (Always Be Connecting the Dots).” The magic in this recipe is seeing the dots in the first place, since most people view only a subset of what’s really out there and work within that framework. It is a criticism of much of higher education that students are taught to ‘collect the dots’ rather than to connect them (Seth Godin, Stop Stealing Dreams).

Companies have made numerous attempts at building innovation groups within themselves, usually without success. Some were iconic failures. Others were extraordinarily influential, just not for the company that paid them – Xerox PARC comes to mind. But the most enduring innovation organization comes from the least likely of places – the US Government, specifically the Defense Advanced Research Projects Agency of the Department of Defense – DARPA. It was founded in the shadow of the Soviet launch of Sputnik, with a simple mission: “to prevent and create strategic surprise.”
The special forces reference pertains to giving local R&D groups the independence to make decisions ‘on the ground’ as the work they are doing dictates. This includes redirecting budgets, hiring, shelving unproductive research directions and revising them toward new ones. In short, it’s about giving the people leading these groups permission to respond creatively to emerging conditions in real time. Summarized in list form, the three principal characteristics of DARPA-like organizations on which their success rests are:

    – Ambitious Goals
    – Temporary Project Teams
    – Independence

These are critical observations from the HBR article by Regina E. Dugan and Kaigham J. Gabriel, former leaders at DARPA. They have taken these lessons learned and translated them to Motorola Mobility’s Advanced Technology and Projects (ATAP) group, which Google acquired in 2012. They address what they believe are the ingredients of the ‘secret sauce’ DARPA has discovered and why most industries and businesses have failed to replicate it.

We believe that the past efforts failed because the critical and mutually reinforcing elements of the DARPA model were not understood, and as a result, only some of them were adopted. Our purpose is to demonstrate that DARPA’s approach to breakthrough innovation is a viable and compelling alternative to the traditional models common in large, captive research organizations.

Relevance to Higher Ed

It might seem logical to translate this to the university context and to the organizations or units within it that try to address emerging technologies and their application to issues of teaching, learning and entrepreneurship. There is some utility in this, but unfortunately it is not a simple matter of mapping DARPA processes onto university practices. If only it were that easy.

Dugan and Gabriel remind us of Pasteur’s Quadrant, developed by Donald Stokes in the late 1990s, in which he argued convincingly that by recognizing the importance of use-inspired basic research, a new relationship can be established between science and government (see Pasteur’s Quadrant: Basic Science and Technological Innovation). The crux of the argument is illustrated by the Cartesian graphic below.

The upper right quadrant is the sweet spot in this model, named after Louis Pasteur for his work advancing microbiology while producing practical advances such as the principles of inoculation, the pasteurization of milk (whence the term comes), and microbial fermentation. DARPA ‘lives’ in Pasteur’s Quadrant.

In the university context, there are a few laboratories and centers that thrive in this space. Some bridge the boundary between Pasteur and Edison, such as MIT’s Senseable City Lab, led by Carlo Ratti. The majority, however, inhabit the upper left quadrant, Bohr’s Quadrant, characterized by pure or basic research. That has been one of the problems that many accountability-minded legislatures find difficult, articulated most succinctly by newly elected Governor Ronald Reagan, who said in 1967 that

taxpayers shouldn’t be “subsidizing intellectual curiosity” at universities.

to which the LA Times replied

If a university is not a place where intellectual curiosity is to be encouraged, and subsidized, then it is nothing.

The challenge is that most higher ed institutions are confronting rapid changes in areas such as big data, analytics, and computational algorithms, but these rarely find their way back into the teaching and learning practices of the academy.

Informing the T&L space with advances in data and learning analytics, visualization, and the cognitive sciences is tricky. On the one hand, most organizations that own this responsibility are in Edison’s Quadrant. They are focused on applications of practical value, often constrained by the caution not to ‘experiment on’ the young charges in faculty classrooms. Their goal is to move best practices of established value more widely into the iconic space where learning purportedly takes place – the classroom. Never mind that there is substantial data to suggest that the classroom is among the last places where substantive learning happens.

A Digression

I’m reminded of a recent visit to a well-known research university where, in the company of a computer science colleague, I visited a variety of groups as part of an information-gathering trip about interdisciplinary innovation. We spent part of the day in that university’s teaching and learning center and then moved on to another group. On arriving at the next stop on the itinerary, I explained to our new host that we had just visited the teaching and learning center. The professor looked me in the eyes and paused for a long minute before saying

some people would say on this side of the campus that if you’re teaching, students aren’t learning…

Boundary Crossing

Getting innovation to ‘happen’ requires stepping outside the square of one’s own design or thinking paradigm. One way to do that is to find people who can engage with you from the edges of your current domain focus, or from outside it altogether. For example, Philips had established a significant market share in PET, CAT, and related medical visualization technologies. But incremental improvements weren’t expanding their market: competitors were making comparable technical advances, and those advances didn’t translate into major growth of the market or large increases in profits. Just the opposite. Substituting newer technology, developed at significant cost, for older technology could neither significantly advance the competitive value of their scanners nor qualitatively improve the patient experience. Instead they sought technologies that might enable the creation of products and services that people would find more meaningful than current offerings. They asked, “Will a new approach, not just to the technology but to the entire problem space in which the technology is embedded, transcend existing needs and give customers a completely new reason to buy a product?”

This is closer to the situation of higher education because most technology enabled learning support organizations are not doing novel, discovery oriented research. That’s the province of the faculty or departmental research labs and institutes. Partnerships with these discovery-focused research facilities can be exhilarating and valuable. But they are not the primary work or intellectual space for the application of learning sciences and new practices to the learning mission of the academy.

The trap for applied learning-science groups is the trap of incremental innovation. Incremental innovations sit well within the existing frame of reference, representing slight improvements that return immediate payoffs, even if relatively small ones. The key idea is that the framing of the problem has not been altered, only the exact steps toward getting to a better solution.

In the case of Philips, the researchers took a step backward, or sideways. Improvements in computed tomography (CT) scanning were steadily advancing. In fact, the number of images a CT scanner could capture with each rotation of the X-ray tube had increased sixteenfold from its introduction in the early 1970s through the early 1980s, and the rotation speed had doubled (improving the machine’s ability to compensate for patients’ movements). It would continue to improve, but so would the machines of their primary competitors. What else could improve scan results or speed the process overall?

For many, getting a CT scan is a profoundly anxiety-provoking experience. You aren’t getting one because things are going well. Further, the process is foreign to any normal person’s experience, full of strange machines, injections, and loud noises. The result of all this? Patients don’t lie still on the scanning bed. No matter how accurate the scanning technology gets, fidgety patients lead to lousy images, more time to capture decent ones, and an overall unpleasant experience. Some patients, especially children, have to be sedated – more time, additional expense, and a further negative response to the whole diagnostic experience.

Expand the frame. What is the totality of the experience, and how can addressing these other elements enhance the core effectiveness of the incrementally improving technology? The answer was to make the experience leading up to and within the CT scanner more engaging, and to distract from the foreign, fear-inducing strangeness it tended to produce. By using LED displays, video animation, RFID (radio-frequency identification) sensors, and sound-control systems, the patient’s experience became the focus.

For example, when a child approaches the examination area, she chooses a theme, such as “aquatic” or “nature.” She is then given a puppet containing an RFID sensor, which automatically launches theme-related animation, lighting, and audio when she enters the examination room. The theme can also be used to teach the child to stay still during the exam: In the preparation room, a nurse may show a video of a character on the sea and ask the child to hold her breath when the character dives underwater to seize a treasure. Projecting the same sequence during the exam helps the child hold her breath and lie still at the right moment. (Roberto Verganti, Designing Breakthrough Products, HBR, 2011)

The result was that patients stayed quieter and fidgeted less, and the picture quality improved dramatically. Fewer patients required sedation, making the process shorter on average. Change the frame, solve a different but related problem, and improve a process that includes the one that was the initial focus of attention. But to do this required boundary crossers.
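The interaction pattern behind the Philips example, an identity token (the RFID-tagged puppet) selecting a theme that then drives every cue in the room, can be sketched in a few lines. This is a hypothetical illustration, not Philips’ actual software; the theme names, events, and cues below are invented for the example.

```python
# Hypothetical sketch: an RFID tag on the child's puppet selects a theme;
# room events then trigger matching cues, including the "hold your breath"
# sequence that keeps the patient still during the scan.

THEMES = {
    "aquatic": {"enter_room": "play underwater animation",
                "scan_start": "character dives: hold your breath"},
    "nature":  {"enter_room": "play forest animation",
                "scan_start": "bird glides: hold your breath"},
}

class AmbientExperience:
    def __init__(self):
        self.theme = None

    def read_rfid(self, tag_theme):
        # The puppet's RFID tag carries the theme the child chose.
        if tag_theme not in THEMES:
            raise ValueError(f"unknown theme: {tag_theme}")
        self.theme = tag_theme

    def on_event(self, event):
        # Look up the cue the room should present for this event;
        # anything unrecognized falls back to a neutral ambient loop.
        return THEMES[self.theme].get(event, "ambient idle loop")

room = AmbientExperience()
room.read_rfid("aquatic")
print(room.on_event("enter_room"))  # -> play underwater animation
print(room.on_event("scan_start"))  # -> character dives: hold your breath
```

The point of the design is that none of this touches the scanner itself: the frame was expanded around the incrementally improving core technology.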

These people have much of the core knowledge base of the primary researchers but approach the problem from a different perspective. In the case of Philips and the CT example, the team certainly had its share of doctors, hospital managers, medical equipment engineers, and marketing experts. What they brought to the table to augment this were architects, psychologists, contemporary interior designers, LED technologists and media specialists, interaction designers, and game-oriented interactive hardware and software designers.

To work creatively in Edison’s Quadrant within higher education requires reframing the problem space. Yes, the applied researcher typically working in Edison’s Quadrant is looking for practical solutions, and often isn’t necessarily looking to understand in detail why something works, just that it does. In higher ed, because of the nature of our work and our cultural context, we generally do care about why something works – to a point. It has to have credibility, because our customers are researchers in their own right, experts in their domains. This is not limited to the STEM disciplines. Faculty in the humanities and performing arts are knowledge creators, using different methodologies, with different criteria by which understanding and meaning are created and assessed. But they are usually looking for logical as well as intuitive consistency that cannot be dismissed with “it just works, but I don’t know why.”

There are elements in the “special forces” approach that are critical to groups trying to apply what we know, and what we’re learning, about the cognitive and learning sciences to enhance the undergraduate experience. Indeed, a portfolio of work that includes ambitious goals, temporary project teams or “hot teams”, and independence are necessary ingredients. But so too is the focus on applied innovation: problem solving in the practical world of undergraduate education, sitting on the boundary of Pasteur’s and Edison’s Quadrants where the work is creative, socially meaningful and pragmatic.


Verne Burkhardt (2009). “Design Thinking for Innovation: Interview with Tom Kelley, General Manager of IDEO, and Author of The Art of Innovation and The Ten Faces of Innovation”. Design Thinking Blog.

Regina E. Dugan and Kaigham J. Gabriel (2013). “‘Special Forces’ Innovation: How DARPA Attacks Problems”. Harvard Business Review.

Donald E. Stokes (1997). Pasteur’s Quadrant: Basic Science and Technological Innovation. Brookings Institution Press.

Roberto Verganti (2011). “Designing Breakthrough Products”. Harvard Business Review.


There’s lies, damn lies and statistics

graph of spurious correlation

Clear correlations “proven” statistically.
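The joke in the graphic is easy to reproduce: search through enough unrelated random series and one of them will correlate “strongly” with whatever target you like. A quick illustrative simulation (made-up data, pure Python):

```python
# "Proving" a correlation by searching: generate many unrelated random
# series and keep the one that happens to correlate best with a target.
import random

random.seed(0)
n = 20  # short series, like most spurious-correlation charts
target = [random.gauss(0, 1) for _ in range(n)]

def corr(a, b):
    # Pearson correlation, computed directly from the definition.
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# Draw 1,000 series that have nothing to do with the target and keep
# the strongest correlation found.
best = max(corr(target, [random.gauss(0, 1) for _ in range(n)])
           for _ in range(1000))
print(round(best, 2))  # a "strong" correlation appears purely by chance
```

With enough candidate series, chance alone produces an impressive-looking r; that is the statistical sleight of hand behind every chart of this kind.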

There’s a reason I stick with my Twitter feed. It brings up articles that I otherwise would have missed. Take these two.


It’s an interesting re-analysis of a climate change article by Lewandowsky, Gignac, & Oberauer (2013), presented by Dixon and Jones (2015), who concluded: “respondents convinced of anthropogenic climate change and respondents skeptical about such change were less likely to accept conspiracy theories than were those who were less decided about climate change.” This was in response to Lewandowsky et al., who had shown a robust linear relationship between climate change rejection and conspiracy theory ideation.

The rebuttal by Lewandowsky, Gignac & Oberauer (2015) seems pretty compelling. The essence of the argument is methodological: when do you choose the statistical model to use and believe? The main point that Lewandowsky et al. make is:

“Dixon and Jones’s core argument is that the relationship between the two variables of interest, conspiracist ideation (CY) and acceptance of climate change (CLIM), is nonlinear, and that the models reported for both surveys were misspecified. To reach their conclusion, Dixon and Jones first make three questionable data-analytic choices to cast doubt on and attenuate the linear effects reported, before they purport that there is nonlinear relationship after reversing the role of the variables of interest in the statistical model for the panel survey. No statistical or theoretical justification for that reversal is provided, and none exists.”

So you can choose a different model, but if you do, you had better have a compelling reason for doing so. Dixon and Jones didn’t.

Hence, Lewandowsky et al. conclude:

“In summary, Dixon and Jones’s analysis has no bearing on the results we reported for either survey because it reaches its main conclusion only by reversing the role of criterion and predictor without any theoretical justification. The only statistical justification offered for that reversal (“with nonlinear models, it is important to explore relationships in both directions”) demonstrably does not apply. Without that reversal, Dixon and Jones’s criticism involving nonlinear relationships is moot because none are present.”

The main point was elegantly stated a bit earlier in the article.

“Any correlation matrix can be fit equally well by more than one model. This issue of equivalent models has been discussed repeatedly (e.g., Raykov & Marcoulides, 2001; Tomarken & Waller, 2005). The consensus solution is to limit the models under consideration to those that have a meaningful theoretical interpretation (MacCallum, Wegener, Uchino, & Fabrigar, 1993). Alternative models should reflect alternative theoretically motivated hypotheses, any mention of which is conspicuously lacking in Dixon and Jones’s Commentary.”

There are lies, damn lies and statistics. If you are going to use statistics wisely, you had better have a good theoretical model on which to base your proposed analysis. In the absence of one, any conclusion drawn is suspect.
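The asymmetry Lewandowsky et al. point to is easy to demonstrate: a correlation is symmetric in its two variables, but a regression model is not, so swapping criterion and predictor produces a genuinely different model and needs a theoretical justification. A toy simulation (illustrative only, not the survey data):

```python
# Correlation is symmetric; regression is not. Reversing which variable
# is the criterion and which the predictor changes the model you fit.
import random

random.seed(1)
n = 1000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.5 * xi + random.gauss(0, 1) for xi in x]  # linear relation plus noise

def slope(pred, crit):
    """Ordinary least-squares slope for the model crit ~ pred."""
    mp, mc = sum(pred) / n, sum(crit) / n
    cov = sum((p - mp) * (c - mc) for p, c in zip(pred, crit))
    return cov / sum((p - mp) ** 2 for p in pred)

b_yx = slope(x, y)  # y regressed on x: recovers roughly the true 0.5
b_xy = slope(y, x)  # x regressed on y: a different model, a different slope
print(round(b_yx, 2), round(b_xy, 2))
```

The single correlation between x and y is the same whichever way you look at it, yet the two regressions answer different questions; choosing between them, like choosing between equivalent models, is a theoretical decision, not a statistical one.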

J.B.S. Haldane, “[T]he Universe is not only queerer than we suppose, but queerer than we can suppose.”

Sorry that at least the second of these two articles is proprietary rather than open access. If you have access to a library that carries Psychological Science, here are the relevant citations:

The open one, with a CC BY-NC license, is:

Ruth M. Dixon and Jonathan A. Jones. Conspiracist Ideation as a Predictor of Climate-Science Rejection: An Alternative Analysis
Psychological Science May 2015 26: 664-666, first published on March 26, 2015 doi:10.1177/0956797614566469

and this one, the rejoinder, is behind a paywall. 😦

Stephan Lewandowsky, Gilles E. Gignac, and Klaus Oberauer. The Robust Relationship Between Conspiracism and Denial of (Climate) Science. Psychological Science May 2015 26: 667-670, first published on March 26, 2015 doi:10.1177/0956797614568432


The University as a Learning Design Problem

One of the things I’ve enjoyed in getting to know the community at the University of Texas at Austin is the energy that exists among fellow faculty to rethink the undergraduate experience. This is particularly challenging at a large, research-intensive state university. And it is especially true when such state universities have activist legislatures with a strongly conservative bent. As Mark Twain once said, “No man’s life, liberty or property are safe while the legislature is in session.”

The Campus Conversation, an activity started by President Bill Powers in 2014, was framed around three primary questions:

  1. How do we implement changes to curriculum and degree programs?
  2. How do we evolve pedagogy for 21st century learners?
  3. How do we create more opportunities for interdisciplinary and experiential learning for our undergraduates?

I have been drawn to one of the six faculty working groups in particular: the one addressing the question of establishing a Teaching Discovery Innovation Center.

Teaching Discovery Innovation Center

Focused on the creation of a faculty-led innovation center, this committee is engaging leaders and stakeholders across campus and externally to accelerate UT Austin’s advancement in this area.

After multiple conversations among a diverse group of interested faculty, we are approaching the point of proposing a way forward using the “90-day innovation process” developed by the Institute for Healthcare Improvement (IHI).

I had the pleasure, and it truly was a great pleasure, to spend time with a colleague I highly regard at Georgetown University, Assoc. Provost Randy Bass, who is leading a marvelous project entitled “Redesigning the future(s) of the University”. They are doing some very critical thinking about the impediments to teaching in ways that are high impact, expansive (in the sense of including both the university and the rest of the world in their conduct) and empowering.

The Georgetown Redesign project describes its experiments in curriculum design and the undergraduate experience as ‘pump-priming ideas’, and it has started with five of them.

  1. Flexible Curricular and Teaching Structures
    1. These might include teaching courses in shorter modules and combining modules such that a set of two or three ends up being a traditional semester in length. Or it may mean unbundling credits into individual units to better fit the pacing of a student’s learning (taking a 6-credit ‘course’ and breaking it into three 2-credit units that could be taken together or separately).
  2. Competency-based learning – this one is relatively self-explanatory. In Georgetown’s perspective it would consist of one or more of the following elements:
    1. Explicit learning outcomes with respect to the required skills and
    2. A flexible time frame to master these skills
    3. A variety of instructional activities to facilitate learning
    4. Certification and assessment based on learning outcomes
    5. Adaptable programs to ensure optimum learner guidance
  3. Expanding Mentored Research – Programs of study that shift from predominantly formal coursework to a substantially different balance of coursework and credit bearing mentored immersive learning through independent or collaborative projects.
  4. New Work/Learn Models – Programs of study that maintain or expand years to degree but include a substantial experiential component (e.g. workplace Co-op), dependent on Georgetown placement (in DC and globally), and guarantee both degree certification and intensive work experience on graduation.
  5. Four-year Combination BA/MA – Four-year combination BA/MA built around new configurations of online and self-paced learning, coursework and experiential learning.

This is really significant.

MIT has been beavering away at how the undergraduate experience needs to evolve through an Institute-wide Task Force, the product of which was the report The Future of MIT Education: Reinventing MIT Education Together. In it was a recommendation about greater modularity as well. One of its attributes, in addition to allowing students greater influence over their own learning pathway, is the creation of structural ‘holes’ in the curriculum – making time and places for experiential learning.

“Recommendation 7: The Task Force recommends that this commitment to pedagogical innovation for the residential campus be extended to the world to set the tone for a new generation of learners, teachers, and institutions…..

<stuff removed>

a. Exploration of modularity based on learning objectives and measurable outcomes. In January 2014 Harvard and MIT released a report summarizing an analysis of the data collected during the first year of open online classes. Modularity refers to breaking a subject into learning units or modules, which can be studied in sequence or separately. The finding that drew the most attention is the low rate at which students who enroll in an MITx or HarvardX class complete it. The first 17 HarvardX and MITx classes recorded 841,687 registrations, of which only 43,196 (5.1%) earned a certificate of completion.

“While the completion rate is low, other data from the report suggests that students are focused more on learning certain elements of a class and less on completing what has traditionally been considered a module or unit of learning. For instance, in addition to those who completed a course through MITx or HarvardX, 35,937 registrants explored half or more of the units in a course, and 469,702 viewed some but less than half of the units of a course. The way in which students are accessing material points to the need for the modularization of online classes whenever possible. The very notion of a “class” may be outdated. This in many ways mirrors the preferences of students on campus. The unbundling of classes also reflects a larger trend in society—a number of other media offerings have become available in modules, whether it is a song from an album, an article in a newspaper, or a chapter from a textbook. Modularity also enables “just-in-time” delivery of instruction, further enabling project-based learning on campus and for students worldwide.”
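The report’s figures are easy to check; the arithmetic below uses only the numbers quoted above:

```python
# Checking the MITx/HarvardX first-year numbers quoted from the report.
registrations = 841_687   # total registrations across the first 17 classes
certificates = 43_196     # earned a certificate of completion
explored_half = 35_937    # explored half or more of a course's units
viewed_some = 469_702     # viewed some, but less than half, of the units

completion_rate = certificates / registrations
print(f"{completion_rate:.1%}")  # -> 5.1%, matching the report

# Far more registrants engaged with *parts* of a course than completed one,
# which is the pattern the modularity recommendation responds to.
partial = explored_half + viewed_some
print(partial)                           # 505,639 registrants
print(f"{partial / registrations:.0%}")  # -> 60% of all registrations
```

The contrast is the interesting part: roughly twelve times as many registrants sampled course modules as finished a whole course.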

Is there a pattern emerging?

— pdl —


AAAS Vies for the Title of “Darth Vader of Publishing”

AAAS is vying for the crown of Lord Vader, or Chief Evildoer, in its approach to suppressing the open dissemination of scientific knowledge, even when that knowledge is paid for by taxpayer money in the first place. They claim to support open access. They redefine it as an article processing charge (APC) of $3,000 USD that restricts the subsequent use of the information in the article, preventing commercial reuses such as publication on some educational blogs, incorporation into educational material, as well as the use of this information by small to medium enterprises. If you really mean open access, the way the rest of the world defines it, you’ll have to pay a surcharge of an additional $1,000. But it gets worse.

Faux Open Access Journal “Science Advances” (perhaps better, “Poor Researchers Restricted”)

A new faux open access journal, Science Advances, is being launched next year that will, get this, charge an additional US$1,500 above the fees listed previously to publish articles that are more than ten pages long. Wait… this is a born-digital publication with no paper distribution. They’re charging $1,500 plus the $4,000 to publish an open access article longer than 10 pages. It is bits, right? Their argument is that the freely provided peer review process is more difficult with longer papers, so they should charge more for the effort – seeing as they are getting their reviews for nothing anyway, this is just pure profit, and who doesn’t like pure profit? They claim that the additional ‘editorial services’ justify this surcharge.

What about publishing your data along with your paper? Why would enough detail to replicate experiments be important, since when we do replicate we often can’t verify the original results anyway…

The AAAS commitment to open access is worth examining. Besides redefining it so that it isn’t open and accessible at all, they published a widely criticised study claiming the peer review process in open access journals is suspect. The criticisms were methodological. John Bohannon, a correspondent for Science, published the results of an experiment, or sting operation, in Science in an article entitled “Who’s Afraid of Peer Review?” to expose the problems of open access publishing. You can read the full text here – or you could if you had access to the journal Science, which is behind a paywall. Those of you from universities or colleges with a subscription to Science can gain access through your library, assuming it has been able to afford to keep the online subscription. The rest of you, well, you’ll have to take my word for it.

It Never Was an Open Access Study

The problem is that it wasn’t a study of open access journals at all. There was no population of other journal types to compare against. He sent a bogus article to a group of over 300 open access journals from a list on the Directory of Open Access Journals site. Fair enough. But none were sent to proprietary journals, so the experiment never explicitly compared the open access model to the subscription model. What he’s pointing out, as Martin Eve notes in The Conversation (a good article, btw), is that Bohannon is highlighting problems with the peer review process. These are not new, and they are a major concern. But they are no more related to open access than they are to subscription economic models.

The bottom line is that Science and the AAAS are trying to redefine open access to make it compatible with their highly lucrative subscription model. They are levying APCs of $5,500 per article to “make a report openly accessible.” They’ve published a flawed ‘study’ to justify their actions and put up a website to support their mistaken claims. They have hired a managing editor for their new ‘open access’ publication Science Advances who is openly critical of the open access movement. They have lobbied the UN, trying to pressure Farida Shaheed, the Special Rapporteur in the field of cultural rights at the United Nations, who is preparing a report on open access for the UN Human Rights Council, by calling open access young (aka immature), experimental (aka risky), and unable to demonstrate the benefits that to them clearly flow from traditional reader-pays (aka subscription-based) publishing models.

It’s time to recognise when a monopoly is trying to consolidate its position at the expense of the very people on whose work its prestige depends. Shame on AAAS.



This fall, in North America, a new open course is starting on open learning, the meaning of connection, and the ways in which it is really possible to engage in distributed learning at a distance. There have been disparaging remarks about the degree to which innovation in learning is really possible at the scale that massive open courses have achieved. What’s innovative in a pedagogy so heavily “designed” for learning at scale that it makes ad hoc small-group discussion difficult if not impossible? More bluntly, how is video-recorded lecturing to 100k learners an improvement in pedagogical practice? The question raised is: does scale prevent good pedagogical practice?

This is one of many questions I hope to explore with others in the upcoming Connected Courses learning event starting this September.


The Post Digital Age

Phillip Long, Ph.D.
Institute for Teaching and Learning Innovation, UQx Project
The University of Queensland
10 August 2014
Image by amattox mattox. Some rights reserved (CC BY-NC).


It’s about time we got there. We started with a model of learners working either independently or in a close relationship with mentors, instructors, or teachers, incorporating knowledge from them built upon scarce data that only their teachers knew. Knowledge and its power were a direct correlate of what you could remember and recall when the time and place for them arose.

But the growth of facts began to challenge even the most capacious human minds. We took to recording them, laboriously by scribes, onto an external storage medium. The effort made this scarce resource costly, adding economic value to the mix. Among the knowledge elite this precious external storage environment was prized and guarded. Practitioners of the passage of knowledge were seen at times silently mouthing words and sentences in the mystical and, to some, very frightening practice of ‘quiet reading’. They translated the coded representations of knowledge on the fly as their eyes danced across the storage medium, bringing them to express things that those around them knew to be beyond their experience – and scaring the unlearned by the “power” in these new devices… books.

The revolution of the printing press democratised access to information. It was no longer only the rich and powerful who could own knowledge. The transition took several hundred years, but it laid part of the foundation for the explosion of knowledge that characterised the Renaissance. It also changed the way we think. What once was knowledge by virtue of memory and recall could now be stored outside the little grey cells in your cranium. We needed an indexing system to track all of that externally stored information, and mechanisms for efficient retrieval.

Various mechanisms emerged that used descriptions of the location of the physical objects (the Persian city of Shiraz’s library, 10th century (1)), the location itself coded by numbers (Library at Amiens Cathedral in France (2)) or Thomas Hyde’s Incunabulum, a printed catalogue of the books in the Bodleian Library, Oxford University.  All of these were attempts to organise and make more accessible to humans the increasingly vast body of information accumulating in the world of knowledge creation.

As tool-making creatures we have always used our ability to build things to improve our existence. Initially this focused on survival, but as we became more capable it quickly spread to other aspects of making life easier, better and more fulfilling. To organise our knowledge we built repositories in the form of library collections, and indexed them, at first somewhat arbitrarily, and later through classification schemes (e.g., the Dewey Decimal System).


The Jacquard head for dobby looms.

The advent of representing information more abstractly, as binary codings of human-readable characters, launched the digital revolution. The initial physical manifestations derived from the cutting-edge technology of the period: gears, levers and pulleys gave way to a re-appropriation of the loom, with punched holes read as ones and their absence as zeros. The “Jacquard head” for dobby looms provided the insight connecting a pattern in the abstract to the weaving that resulted. Jacquard’s key idea was the use of punched cards to represent a sequence of operations, an idea that led to the Analytical Engine of Charles Babbage and later to the card-tabulating machine Herman Hollerith built to perform the 1890 US Census.
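The holes-as-ones idea above can be made concrete with a small illustrative sketch (my own, not from the post): rendering a character’s binary code as a punched-card-style row, where a hole stands for 1 and an unpunched position for 0. The function name `punch_row` and the `O`/`.` glyphs are arbitrary choices for illustration.

```python
# Illustrative sketch: view a character's binary encoding as one row of a
# punched card, 'O' marking a hole (bit = 1) and '.' an unpunched spot (bit = 0).

def punch_row(ch: str, width: int = 8) -> str:
    """Render the binary code point of a character as a card row."""
    bits = format(ord(ch), f"0{width}b")  # e.g. 'A' (65) -> '01000001'
    return "".join("O" if b == "1" else "." for b in bits)

for ch in "LOOM":
    print(ch, punch_row(ch))
```

Running it shows how a sequence of such rows is, in effect, a stored program of weaving (or computing) operations, the abstraction Babbage and Hollerith later generalised.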

Tying all this together is the use of technology to augment the human intellect. Fast forward to the end of World War II: the same concern about the proliferation of information, and about ways to find and use it rather than continue a cycle of rediscovery, was expressed by Vannevar Bush in the classic “As We May Think”, published in The Atlantic, July 1945 (3). In it he proposed, based on the state of the art of his day, the Memex, a machine to record knowledge by mimicking the human search process through what he termed associative trails. He wanted to record not just the artefacts but the ways humans thought through the steps that led to those artefacts. Further, he wanted to make these shareable, so that others could see not just the result but the process by which it was derived.

It took another 23 years before the technology of the day could even attempt a physical implementation of this idea. It was presented to the world in a breathtaking live demonstration by Doug Engelbart at the Civic Auditorium in San Francisco, California: the “Mother of All Demos” (4). The demo was an attempt to show, rather than talk about, what Engelbart had written six years earlier in the landmark paper Augmenting Human Intellect: A Conceptual Framework (5). In one 90-minute ‘show and tell’, Doug introduced the computer mouse, video conferencing, teleconferencing, hypertext, word processing, hypermedia, object addressing and dynamic file linking, bootstrapping, and a collaborative real-time editor. Our world was forever changed.

Today we routinely offload memory into silicon. Some call this Google dumbing down our intellect (6), but we’ve been offloading memory for centuries – we just do it better and more efficiently today. And while we may appear mired in our devices – walking head-down into street signs as we text, or sitting at the dinner table with friends while ‘friending’ people who aren’t present through the devices formerly known as phones – we are moving past that era.

What marks this shift? The post-digital age is like prior radical transitions: it is marked by the fact that we no longer recognise it as different. Think back to when your parents had an icebox, replenished with block ice at least daily. And then something happened: refrigeration. In less than a generation we went from astonishment at this miracle to forgetting that the world had ever been different.


Zoom gesture comes naturally. CC BY-NC-SA Alec Couros.

Look at children playing today with their parents’ smartphones, or perhaps their own tablets. When they walk up to printed pictures they naturally try to manipulate them with the ubiquitous thumb-and-forefinger spread to zoom the image. We walk around with digital sensors measuring our gait, altitude, and velocity, and glance at our ‘phones’ (we need a new term for these) to see the dashboard of our activity.

More importantly, we are starting to see and think in ways we couldn’t before, because our devices are shaping what we conceive of as questions. In 2011, researchers began to make ‘movies’ by reconstructing, from recordings of brain activity, the imagery of films their subjects had watched (7). We are on the verge of communicating rich media from neural storage to the sensors that pre-process it in others. It won’t be long before we can transfer such memories while bypassing their biological encoding. Will these be ‘memories’ at all without that step? Will we perceive them the same as those created by our own neural infrastructure? We don’t know yet, but we soon will.

As the embedding of digitally enabled devices extends the concept of the Internet of Things (8) to the interconnection of networked objects with ourselves, we silently enter the post-digital age. As David Foster Wallace said in 2005, “the most obvious, ubiquitous, important realities are often the ones that are the hardest to see and talk about” (9). Fish don’t see water, but we must. Welcome to the post-digital age.



(2) Joachim, Martin D. (Ed.) (2003). Historical Aspects of Cataloging and Classification, Volume 2. The Haworth Information Press, Binghamton, NY, p. 460.





(7) Nishimoto, S., et al. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21(19):1641–1646.

(8) Ashton, K. (22 June 2009). “That ‘Internet of Things’ Thing: in the real world things matter more than ideas”. RFID Journal.



Radically Transparent Research – or Why Publish Before Peer Review?

I was reading Gardner Writes, following my colleague and friend’s thoughts from the other side of the globe with great interest, anticipating the thinking he forces me to do. He’s been on a multipart-series kick lately, probably to break up a long piece of discursive writing that formed the spine of a report he wrote for his home institution, and thereby make it more accessible and easier to digest.

He made an aside referencing a blog post by Dave Winer about why he (Winer) writes in public. This resonated particularly at this moment, as we’re in the midst of writing a grant proposal for a funding body in Australia, one thrust of which is how to break the cycle of the traditional linear research methodology. In engineering education, the domain at which this proposal is targeted, as in most engineering and scientific disciplines, the process can be described as:

Conceive — Design — Implement — Operate — Analyze — Disseminate

(That’s a modification of the engineering methodology some of you may be familiar with from the CDIO consortium)

The point is that you follow your experimental protocol: conceptualise the hypotheses derived from the theoretical framework you think informs the work; design the experiment and the methodology to collect data that might support or refute those hypotheses; translate that methodology into something that actually allows you to do the work; conduct the experiment, collecting the data and trying out the methods you think will answer the questions you posed; analyse the results when the experiment is over; and then, perhaps with the others involved, write it up to share with your colleagues. The journal publication process alone can take from as little as three or four months to upwards of two years. All up, the cycle you’ve just engaged in is measured in years.

Since all of your colleagues in the community are going through this same process to explore their own hypotheses, the asynchronous and overlapping timelines mean that results are shared throughout the stages of the cycle. It’s more likely than not that some of this new knowledge has value for your work, indeed could influence it; and if you weren’t locked into your methodology for the sake of experimentally sound results, you’d likely have changed something along the way to leverage what you’d just read, or discussed with the authors via email or at that conference you both attend annually.

All up, the two-year period it takes to devise, conduct and report on your work IS as much of a problem as any aspect of the work itself. If the point of all this is to learn and improve the educational process, the chances of doing that meaningfully in our lifetimes are low. After all, many of the outcomes of this cycle aren’t going to be particularly informative: two years to report that you didn’t find anything significant is a long time in the working life of an academic, and even longer in the educational trajectories of our students.

Dave Winer described why he writes his blog saying,

I write to express myself, and to learn. Writing is a form of processing my ideas. When I tell a story verbally a few times, I’m ready to write it. After writing it, I understand the subject even better.

Why he writes and why we publish are similar; the problem is the timescale. Blogging is relatively rapid. The short cycle time helps us share our thoughts, forces us to examine what we’re thinking about, and invites others to see, consider, and respond to it.

I write to give people something to react to. So you think the iPhone was a winner from Day One. Great. Tell me why. Maybe I’ll change my mind. It’s happened more than once that a commenter here showed me another way of thinking about something and I eventually came around to their point of view. And even if I don’t change my mind, it’s helpful to understand how another person, given the same set of facts, can arrive at a different conclusion

That’s the discursive dialogue we’re missing in science, but which the open source community has developed and understood for some time. That’s the community Winer comes from, and the transparent sharing of thoughts and ideas is part of its culture, the method of open source development. And that’s what’s missing from the higher-ed learning research community. Our livelihoods are tied to the articles we produce, to the impact factors of the journals in which they are published, to the attribution of ideas to ourselves as original authors. We sacrifice the enormous potential of the community by holding fast to the belief that if we share openly, our intellectual contribution will be devalued, lost, or worse, stolen by someone claiming the idea as their own. We worry about our ideas being ‘stolen’, when in reality putting them out there in the first place establishes our provenance.

Once upon a time, when producing and sharing ideas took huge amounts of capital, significant effort to achieve the production values we sought in quality work, and complex, costly, time-consuming mechanisms to distribute the work to colleagues via journals trucked, mailed and shipped to libraries in higher-ed institutions around the world, concern about the provenance of an idea was more meaningful. Recall the Darwin/Wallace conundrum surrounding the first published comprehensive expression of evolution by natural selection, presented to the Linnean Society in London. (Not familiar with this extraordinary coincidence of paradigm-shifting ideas co-occurring? It’s a fascinating story, and one that owes a tremendous amount to Charles Lyell. Darwin had resigned himself to being ‘forestalled’ in presenting to the world, as his own work, the idea of natural selection as the driving force acting on natural variation. The process by which this was addressed, the compromise that emerged, and the thoughtful intervention and guidance of Lyell are a lesson in ethical conduct, friendship and skilful political savvy.)

The grant we’re writing is in part an attempt to demonstrate that there is another, more promising way to conduct this enterprise. The open notebook science movement has been around for some years – we are NOT claiming novelty here. We are simply trying to appropriate the methodology and apply it to research on learning design in engineering education. In this we’re adapting the work of others, most recently what I’ve been reading on the blog of Mel Chua. There she writes:

Radical transparency refers to the cultural practices used by healthy open communities (Free/Libre and open source software, open content, and open hardware projects) to expose their work in as close to realtime as possible and in a way that makes it possible for others to freely and non-destructively experiment with it.

From Mel Chua: Hacker. Writer. Researcher. Teacher. Human jumper cable.

We have to take the collective wisdom of the community and carefully, but more rapidly, apply it to improve learning design for higher-education students. We don’t want to put the students in our courses at risk – a concern often raised as a “show stopper” by those who think that, without incontrovertible evidence that a different teaching approach is an improvement, it’s better to stay with what works. But does it? That is, does it work? How do you know? And is the asserted risk really there? As a colleague and co-writer of this grant once said in a panel discussion with me, about changing his teaching to the ‘flipped classroom’ approach,

Students won’t let you ‘fail’. They will raise concerns early and loudly if something isn’t going well. Any teacher worth being in front of the room will respond and change what they are doing to avoid the catastrophe. Things may not go as you planned, but they won’t end up damaging the students in the class, because the students, and a good teacher, won’t let that happen.

So where is the real risk? It’s in not being open and transparent to learn together.

