Saturday Morning Reading


Image credit: Susan Murtaugh, Phil reading on iPad, CC BY ND

Saturday mornings are a time when I sit down with a cup of coffee and do some ‘lateral reading’. What does that mean? I have some initial ideas of where I want to start reading, but I then follow leads, links on Twitter, etc., wherever they take me. I periodically ‘reset’ back to the topic list in my ‘todo’ list, but it’s a dialectic between curiosity and the projects on my mind.


I thought I’d trace the patterns of this, as I’m curious about what others do when they sit down to recharge, explore, and move some of their project work forward.

Saturday morning lateral reading usually lasts until the early afternoon. It begins after a light breakfast (bagel and fruit, usually) while listening to NPR Weekend Edition followed by Car Talk Classic. My usual routine involves reading the news on my iPad (NYT, The Guardian, WP), checking what’s new on Twitter and FB, reading news bits from selected journal TOCs and, depending on the research articles, an article or two (Science, Nature, CACM, Psychological Science in the Public Interest, etc.). Then the todo list emerges to vie for attention.

Yesterday went something like this. I read the news sources that are my routine go-to sources for the events of the day. While eating breakfast and going through the stories, I listened to a caller on Car Talk asking why, when she was driving on a rural road in a thunder and lightning storm, a lightning strike hit the road in front of her and didn’t hit her car. She thought her car, being a metal box, would have been the more likely target. This led initially to a discussion about lightning strikes in general and the direction they travel (from clouds to earth, or earth to clouds).

Ray said he thought it was actually earth-to-cloud in direction, which Tom, with his infectious laugh, thought was ‘bogus’. This led to some banter for a bit and prompted me to look up information on the formation of lightning and its mechanism of discharge, via a Chrome search leading to the Earth Science Stack Exchange site. There, a really well-written post by Vikram (4-14-14) described the process: at the cloud end, a build-up of negative charges (electrons), coupled with a comparable increase in positive charges on the ground, reaches the point where the cloud releases a burst of negative particles that move earthward in a stepwise pattern. They advance 50–100 m, pause about 50 μs, and branch again, searching for the path of least resistance toward ground. When they get close enough to earth, the positive side arcs toward the nearest of these branching stepped leaders. When they connect, they have completed a path of least resistance, and the ensuing flow transforms the channel into a plasma, generating enormous heat (50,000 Kelvin) and enabling the positive and negative particles to flow toward their opposite polarity. The short answer is that the flow happens in both directions, but the flash we see as lightning is actually going from the ground to the clouds.
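Vikram’s numbers invite a back-of-the-envelope check. Here is a toy estimate, assuming a hypothetical 3 km cloud base (not stated in the post) and using the 50–100 m step length and ~50 μs pause described above:

```python
# Toy estimate of a stepped leader's descent time. The cloud-base
# height is an assumed value; step length and pause duration come
# from the description above (50-100 m steps, ~50 microsecond pauses).

CLOUD_BASE_M = 3000   # assumed cloud-base height, not from the post
STEP_M = 75           # midpoint of the 50-100 m step range
PAUSE_S = 50e-6       # ~50 microsecond pause between steps

steps = CLOUD_BASE_M / STEP_M        # number of steps to reach ground
descent_ms = steps * PAUSE_S * 1e3   # the pauses dominate the travel time

print(f"~{steps:.0f} steps, ~{descent_ms:.0f} ms to reach ground")
```

Even with generous assumptions the whole descent takes only a few milliseconds, which is why the stepped structure is invisible to the naked eye.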

At that point I settled into my reading chair (a Stressless chair with ottoman and a swing-arm computer tray) by a window and picked up last month’s Communications of the ACM (CACM), which was bright pink; on the cover was the screaming headline “SEX is an algorithm”. Scanning the ToC, I checked out what was inside and where I wanted to focus my time. I was attracted to a couple of Viewpoint articles. I started with “Technology and Academic Lives” by Jonathan Grudin, a Principal Researcher at Microsoft Research in Redmond.

The article was about the rise of ‘busyness’ in higher ed, that is, the sense most of us have, along with, I suspect, everyone else in the western world, that our days are increasingly packed with more and more stuff to do in less and less time. All of this contributes to the loss of concentrated thinking time spent discussing things with local colleagues. Local is the key descriptive term here. Grudin divides this span into four eras.

  • 1975 – Pre-internet era
  • 1979 – Pre-web era
  • 1995 – Early web era
  • 2015 – Information Age

It was a nice read that followed the theme of computer-mediated communication increasing as social, face-to-face interaction decreases. The pre-internet era by necessity meant that your primary intellectual stimulation and challenges emerged from discussions with your departmental colleagues and graduate students. With the internet (pre-web), the opportunity to connect with researchers in your specialty or sub-specialty around the world meant substantive discussion of your work became focused in this distributed community. And with that came a cost in your social connectedness to your local community, and even to some extent to your grad students.

The 1995 period was marked by recommendations for new hires based primarily on external letters from people to whom most in the department had loose ties or none. This was coupled with the continuing diminishment of one’s local community in relation to one’s research.

By 2015, data had proliferated to the point that an obsession with quantification emerged. Polarization increased between those who are and aren’t quantitatively focused, coupled with a sharp rise in assessing the impact of one’s work. The focus on good teaching was replaced by a rise in the importance of avoiding bad teaching. Raising money grew in significance, making grant-getting a priority and diminishing the stature and voice of those not as successful at, or interested in, that side of the profession.

In summary, the current status quo is marked by:

  • increasing importance of fundraising;
  • increasing significance of rankings;
  • narrowing of interests through specialization;
  • acceleration of scholarship and discovery through collaboration across distance;
  • distributed research teams that come at the cost of local community and bring an increase in weak ties.

That rings true to my experience. Close-knit local research communities are a thing of the past.

Grudin ends by suggesting we think about new forms of interaction and assessment that are less impersonal and stressful. He uses the analogy of the martial art of Aikido, where the forces directed at you are redirected to achieve positive outcomes and retain balance. Malcolm will like this reference.

Next up was an article by Pat Helland entitled “The Power of Babble”, about the proliferation of metadata and standards. There was a nice quote from Dave Clark (MIT) about how successful standards happen only when they are lucky enough to slide into a trough of inactivity, after a burst of research and before a huge investment in productization.

Systemic changes in large computing systems require translation between two data representations, and that’s likely to be “lossy”. Often one builds a canonical representation that the old system’s data must be converted to, and from that converted again to the new data structure. That’s doubly “lossy”.

The article fell down for me at that point, as Helland calls for simply becoming more relaxed about what you don’t understand, accepting befuddlement with pleasure.

Next up were a couple of research-for-practice articles on distributed consensus systems and the Paxos, Chubby lock service, and Raft algorithms. The latter was referred to as “Paxos for humans” (:-)
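The common thread in Paxos, Chubby’s lock service, and Raft is the majority quorum: a value counts as committed only once a strict majority of nodes has accepted it, so any two majorities must overlap. A minimal sketch of that invariant (just the core rule, not any of the actual protocols):

```python
# Sketch of the majority-quorum rule at the heart of Paxos and Raft.
# This is only the core invariant, not a faithful protocol implementation.

def committed(acks: int, cluster_size: int) -> bool:
    """True when a strict majority of the cluster has acknowledged an entry."""
    return acks > cluster_size // 2

# Any two majorities of the same cluster share at least one node,
# which is what lets a new leader discover every committed entry.
print(committed(3, 5))  # 3 of 5 is a majority
print(committed(2, 5))  # 2 of 5 is not
```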

A contributed article in CACM on “Spark: A Unified Engine for Big Data Processing” looked interesting. That was a harder read.


Image credit: William Starkey on Geograph, CC BY NC SA

Finally, from a tweet that came in toward the end of all this: a video interview by Steve Wheeler (of Plymouth University) with Yves Punie, who keynoted the EDEN conference in Europe and spoke about the digital competencies needed by learners and citizens in society today.

Punie described 21 digital competencies grouped into five clusters:

  1. Understanding digital information, its authority and its critical evaluation
  2. Communicating in a digital world, learning how to collaborate and share
  3. Becoming facile with digital content creation, both as individuals and groups
  4. Understanding issues of safety, privacy, health, and well-being in cyberspace
  5. Digital problem solving including reflecting on what problems need to be solved

Punie noted that a recent survey of employers in the EU reported that 37% of workers don’t have sufficient digital skills to do their jobs. This, he indicated, is a failure both of companies, in not providing professional development and training, and of educational institutions, in not graduating digitally prepared workers or socially constructive digital citizens.

This was followed by reading a paper, “Embracing Confusion: What Leaders Do When They Don’t Know What to Do” (Phi Delta Kappan), and some email.

That is probably representative of my morning-to-early-afternoon work. Sunday was more of the same, with more attention to my todo list.

What do you do for lateral reading?


Posted in informal_education, interdisciplinary_learning, Uncategorized

Education 2020


image credit: Dragan CC BY

Post Factual Times of Magical Thinking – Dec. 14, 2016

[This is the tag line that will begin writings that fall into the strange world after Nov. 7th, 2016.]


How do you address the possible futures of education in 2020? We have a hard time figuring out what’s going to happen next quarter, let alone next year, or four years into the future. This was the task I was given by Campus Consortium for a webinar that took place today.

My preparation was some days of personal reflection, some email back and forth with colleagues, and reading. I had 20 minutes, after which there was some Q&A. Webinars are an odd format: depending on the platform, your ability to get any sense of the audience is highly variable. In this case I’m told there were 112 logins from all over the world. Not bad, though perhaps not up to the lofty standards of my colleague Bryan Alexander’s Future Trends Forum 😉 After the presentation, I fielded some questions from the chat room and some audio questions and tried to address them.

Below are the talking points that I used to guide my speaking.


image credit: mayeesherr. future CC BY


  • Post-course Era & Inter-disciplinarity – The problems of today are not solvable within disciplinary boundaries. This will lead to an increase in the inter-disciplinarity of the formal learning experience.
    • We are seeing colleges, schools, and collections of departments being reorganized around “challenges”; ASU is a prime example. They might simply be re-instantiated as new business units, but the dynamic nature of which challenges are worth addressing will change with much greater fluidity than our previous disciplinary categories did. In effect, the infrastructure of units, and more importantly their membership, is an aggregate property defined by the collection of resources and people that carry that designation. It’s the inverse of the re-factoring of the learning environment underway today, where courses are an emergent characteristic of the individual students who select a topic of study, not a bucket into which students are poured.
    • The emergence of interest in systems like Salesforce is driven in part by the realization of the learner as the organizing principle of university systems, learning or otherwise. The “course” as the organizational unit of learning is fading. It’s still important to be able to have that lens available, but it will no longer be a fundamental building block of the architecture. LMSs that don’t figure this out soon will be relics.
  • Academic Learning/Post-graduation Earning – Institutions, particularly public institutions, face increasing pressure to demonstrate value and accountability. This is leading to pressure for greater clarity in the connection between the academic learning experience and the collection of capabilities developed, which map into productive working and earning opportunities post-graduation. This sits in the context of the push toward the ‘gig’ economy, which will be met and shaped by concerns for social well-being too often sacrificed along that trajectory. Where does this lead? It leads to a reversal in the standing of what we have called ‘hard’ vs. ‘soft’ skills.
    • It also leads to a recognition that an individual learner must be considered a part of the institution’s student body, if you will, from the time they enroll and continuing for the rest of their lives. Transitioning their role from undergraduate student to ‘alumni’ may make certain marketing sense, but their increasing need to top up their skills and expand their capabilities with recognized certifications or even new degrees means we need to treat them like core members of the learning community who simply have different tags associated with their current lifecycle status. That might be what we think we’re doing today, but the ease with which these individuals can transition between roles and participate in ongoing learning opportunities of varying duration, with and without accreditation, will challenge this notion.
  • Recognition of Learning Achievements (RLA) – learning happens in many places and in many contexts, not just the classroom. We know that, but we have failed to recognize it in sharable, transportable ways. The rise of micro-credentials, backed by metadata developed in the badging world, provides a pathway toward an “Open Architecture for the Recognition of Learning Achievements”. Behind this is the drive toward extended transcripts and various forms of recognition of achievement collectively referred to as badges, a synonym for the representation of micro-credentials. Like all of these activities, there is a technology component and even larger instructional-delivery and faculty-culture components.

    image credit: Phillip Long, Bolonga_University3 CC BY NC SA

    • Core elements of RLA are:
      • the description of the learning outcome,
      • the rubric by which the achievement is judged or assessed, and
      • the evidence that the learner submits by which the rubric is applied.

      Extending the transcript, in effect, transparently gives insight into the decision rules and evidence behind how the summative score or grade was actually determined, in a way that an independent outsider can understand and reasonably judge. Linking to this data is what the extended transcript is all about, and badging systems provide a ready infrastructure to accomplish it, needing only attention to integration.

      • The portability of this record of achievement will be a major issue in the future. Workers stay in a job an average of 4.4 years before changing. They will work in something like 15 or more different jobs over the course of their working lifetime. Having to go back to every institution from which they’ve earned a degree, certificate, CEUs, CMEs, etc., is a nightmare and can’t stand.
      • Enter the blockchain….
  • Growth of Learner Agency – Learners need to build their knowledge, literally and figuratively, to be successful across their lifespan. To achieve that, institutions will need to provide more integrated and connected experiences that enable students to ‘do the discipline’ instead of either hearing about what the discipline is or listening to what others have done in it. The results of their achievements need to be associated with the learner, not solely the institution. This aligns with the greater independence of the future work environment and learners’ need to present themselves and their learning achievements to employers and collaborators. It’s absurd that demonstrating one’s achievements today requires contacting every degree, certificate, and learning or professional program to have those entities send ‘authentic records’ of your learning achievements to potential employers. As mentioned above, given today’s average job duration of 4.4 years, this is crazy.

image credit: Phillip Long, CC BY

  • Continued Advance & Ethical Challenges of Big Data and Analytics – there is no doubt that the computational capability to analyze big data is just beginning in higher ed. The data is really not “big” in comparison to astronomy, nuclear physics, or economics, but it is a qualitatively large step up for educational data sets. Serious concerns will need to be met and addressed in terms of privacy, security, and the ethics of the use of this data; the IMS Global Learning Data & Analytics Key Principles are one starting point.
    • These principles include clarity about ownership of learners’ data. A challenge to many institutions is the assertion that learners own the data generated in the course of interacting with university systems. This is a challenge because we act like the institution owns it, even while we often say the student or learner owns it. Ownership without the ability to do anything with the data, however, is meaningless.
    • Other principles include
      • stewardship
      • governance,
      • access,
      • interoperability,
      • efficacy,
      • security and privacy, and
      • transparency
    • Team-based Course Development & the Learning Engineer: The collaborative design and development of technology-mediated learning experiences is becoming an essential element of group course development and design. Whether in the digital surround of the residential learning environment or in a more fully online distributed learning environment, the demands of the design process are creating the need for the role of the “Learning Engineer”.
      • This change in design practice is predicated on the recognition that the role of a faculty member can only be stretched so far. It’s less and less realistic to believe an instructor can be the domain expert in their discipline, a productive researcher in that domain, an instructional designer, a UI expert, a learning scientist, and a dynamic presenter. People may have many of these attributes, but having them all is unreasonable to assume and difficult to find in practice.
      • What does that mean? It means functions need to be segregated into roles that support the faculty. One of these roles is the learning engineer: someone with the multidisciplinary skills of learning sciences, cognitive psychology, and learning design, along with the computational skills to bring these to a digital learning environment.

image credit: Andrés García, Isolation, CC BY NC

  • Personalized Learning & Social Context – A trend is emerging to meet learners where they are, not at the mythical median represented by the average student. The ability to gather data and analyze it, increasingly in real time, to provide relevant, timely, and predictively guided personalized learning pathways is both a holy grail and a chimera. It is appealing to provide desirable difficulties framed by the strengths and deficiencies of the learner’s current state of mind, but we have evolved over tens of thousands of years to be exquisitely social creatures. We have to retain and emphasize the social dimension of learning, even in distributed online and so-called personalized learning environments.
    • The challenge here is personalization without isolation. Technology must expose and allow learners to be aware of where other learners are in their learning journeys and facilitate ways for ad hoc group formation that allow peer interaction and study to occur in the context of the intersection of their personalized journey with others.
  • Rise of Openness – the expansion of “open” is now moving beyond its roots in open-source software into open access (journals/publications), open science, open data, open educational resources and textbooks/publishing, and, more generally, open scholarship. What is emerging is that transparency is an essential element of advancing knowledge, and the network effects of open sharing accelerate discovery, innovation, and progress. This is not a battle between commercial practices and open sharing; it’s about combining the two into sustainable strategies that harness the power of “open”.
  • Exploitation of Open: security and identity in an age of evil actors – this is the converse of, and threat to, the power of open. It is both a technical challenge and, even more, a cultural challenge. Protecting one’s identity and avoiding data theft has gotten much harder with the advent of sloppily designed, rushed-to-market IoT devices. The recent DDoS attack on Dyn exposes the fragility of our online infrastructure. Universities can continue to lock down their services and build virtual moats around their campuses, or they can integrate more sophisticated defenses into the devices that connect them while remaining engaged with the world.
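The three core RLA elements listed above under Recognition of Learning Achievements (outcome, rubric, evidence) are concrete enough to sketch as a record. All field names below are hypothetical; real systems such as Open Badges define their own schemas:

```python
# Minimal sketch of an extended-transcript entry carrying the three
# core RLA elements. Field names are invented for illustration only.

import json

rla_record = {
    "learning_outcome": "Design a normalized relational schema",
    "rubric": {  # the decision rules by which the achievement is judged
        "criteria": ["correct keys", "3NF justification"],
        "scale": ["emerging", "proficient", "mastery"],
    },
    # learner-submitted artifact the rubric is applied to
    "evidence_uri": "https://example.org/portfolio/schema-project",
    "assessed_level": "proficient",
}

# Serializing the record is what makes it sharable and machine-readable,
# which is the point of an extended transcript.
print(json.dumps(rla_record, indent=2))
```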

Some examples of technologies in support of these future trends (these are NOT endorsements, but illustrations):


AI applied to VA de-identified healthcare records

Yesterday, Nov. 29th, Flow Health announced a five-year partnership with the US Department of Veterans Affairs (VA) to build a medical “knowledge graph”, using AI to inform medical decision-making and to train AI to personalize care plans.

AI used to find complex patterns in medical symptoms and treatment outcomes. Photo credit: Flickr – A Health Blog, CC BY SA

This is a big data project that will examine millions of VA records, looking for associations between presenting symptoms and interventions with respect to the outcomes that followed.

One has to be extremely careful here, because it is simply looking at correlations; causality is another thing altogether. But it might reveal patterns that were just too indistinct without the ‘magnification’ of big data analytics to surface relationships. Follow-up studies will be required to establish whether these patterns are meaningful. Still, it’s an area of promise that is possible with today’s computational environment.

Posted in Analytics, big_data

Blockchains as a fresh angle toward centralizing power and wealth?


Image credit: Philip Bouchard CC BY NC ND


Recently a post from Manuel Ortega on the blog Las Indias in English, titled “The blockchain is a threat to the distributed future of the Internet”, attacked what he sees as thinly veiled corporate centralization of the internet through its current darling, the blockchain. Bitcoin, the initial target, is a mechanism by which those interested in centralizing power, control, access, and wealth wield economic might. Big banks and those aligned with them, entities he refers to as “centralizers” (with a link to IBM’s blockchain finance work as an illustration of the definition), are building dependence on heavyweight infrastructure, a synonym for centralized industrial/corporate activity. Dependence on industrial infrastructure thwarts those seeking independence and autonomy on the internet.

The initial evidence of the centralizing-control property of blockchains is Bitcoin’s reliance on mining functions, primarily taking place in China, that create the coins that are the currency of Bitcoin exchange.

This is easy to verify when you look at the way that two Chinese “mines,” Antpool and DiscusFish/F2Pool, hoard more than half of the blocks created by the bitcoin blockchain

That this has emerged in the Bitcoin environment means that this method of financial exchange is controlled by those who have large amounts of capital and can invest in the infrastructure required for Bitcoin transactions. The permissionless distribution of blockchain transaction records to all participants masks the reality that it is just another centrally controlled system, directed by those with the capital to create the currency.

The use case explored to validate these assertions is an application called Twister, a P2P microblogging platform that uses Bitcoin’s blockchain infrastructure. There is an odd lead paragraph that introduces Ethereum, and with it the notion of ‘smart contracts’, but it’s only tangentially related to the arguments that follow, which instead focus on Twister, which uses native Bitcoin software. Apparently, because they both use some form of blockchain, that’s enough to tie them together and impugn Ethereum based on the Twister critique. That’s like criticizing Oracle based on a critical analysis of MySQL because they both use some variant of relational databases. It’s obfuscation that isn’t germane to the argument, and it happens quite a bit in the citations offered (e.g., the aside below).

ASIDE – A quirky reference to ‘corporate developments’ leads the paragraph introducing Twister. The aside looks at that, but it’s tangential to Ortega’s primary argument, so you can skip it by not clicking on the link.

It’s important to recognize how Twister is using the blockchain; it’s not what you might initially think. Twister is an alternative to Twitter, and its blockchain is focused on establishing an immutable user name. Why? Because of a concern that people can masquerade as someone else. The developer of Twister writes in an FAQ entry:

“Therefore this other peer may try to deceive you by providing forged posts from other genuine users or to refuse to store or forward your own posts.”

The mechanism to prevent forged posts is having the Twister client check that each post is properly signed by the sender. That is the role of the cryptographic feature of the blockchain he’s leveraging.
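The verification pattern is simple to sketch. Twister actually uses public-key (asymmetric) signatures tied to the registered username; since Python’s standard library has no such primitive, the sketch below substitutes a keyed HMAC just to show the check-the-signature idea, not Twister’s real scheme:

```python
# Illustrative only: Twister uses asymmetric signatures; this HMAC-based
# sketch shows the verification pattern with the standard library alone.

import hashlib
import hmac

def sign_post(key: bytes, post: bytes) -> bytes:
    return hmac.new(key, post, hashlib.sha256).digest()

def verify_post(key: bytes, post: bytes, signature: bytes) -> bool:
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(sign_post(key, post), signature)

key = b"alice-key"  # hypothetical key material
post = b"@alice: hello twisterverse"
sig = sign_post(key, post)

print(verify_post(key, post, sig))            # genuine post passes
print(verify_post(key, b"forged post", sig))  # forged post fails the check
```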

Ortega’s main concern, however, is that Twister’s use of this method requires that the blockchain ledger be transferred to your machine: since Bitcoin’s blockchain is permissionless, every user gets the full history of transactions, in this case the full list of Twister users, encrypted of course. Ortega worries that this requires a lot of bandwidth (well, not now, as the service is small, but if it took off it could). The assumption is that both the storage required to support this and the bandwidth to engage are “insurmountable barriers”, high enough to disenfranchise the average punter but trivial for corporate giants the likes of Google, Amazon, and IBM.

Ortega’s concerns boil down to three points:

  1. Using blockchains requires bandwidth that is easily accessible only to big corporations, in terms of both affordability and physical access.
  2. The size of the blockchain places too high a storage burden on the user, whilst being trivial to the corporate players.
  3. The distributed consensus algorithm of the Bitcoin blockchain is still subject to the 51% attack problem, if not directly in controlling commits then in hoarding Bitcoins themselves and thus controlling the function of the transaction environment. It’s also energetically and computationally expensive.


1. Bandwidth

I’m less concerned about bandwidth in this instance, as it is not specifically a blockchain problem; it’s an overall internet-access problem. Of course, that doesn’t mean people using the internet, and needing bandwidth for whatever they’re doing, aren’t affected. It’s also true that bandwidth follows development, so poorer areas by and large have less bandwidth.

What it does mean is that this one particular use of bandwidth is not, to me, the entry point for solving a much larger internet-access problem. There are other drivers more likely to motivate changes there, like healthcare. This issue sits in the same category as most inequitable-wealth-distribution problems: important, but requiring multi-faceted strategies to address.

2. Blockchain Size

Local storage capacity is a major concern for the millions of users and potential users around the world who don’t have access to storage at an affordable price. But like bandwidth, blockchains are just one of many hundreds of applications that demand more storage capacity.

The Twister example that Ortega focuses on is a peculiar choice as a use case. The blockchain there is used to establish immutable user IDs, so that no one can masquerade as someone else when sending microblogging messages. The size of the blockchain in this instance is, by design, small. It’s hard to see it imposing a burden on the users of this P2P microblogging platform.

Ultimately, I don’t think you build toward the future by constraining your creative solution space to current limitations. Don’t get me wrong: we have to address systemically the forces that are retarding, or unwilling to address, access, bandwidth, and affordable storage. But leading that charge with blockchain storage requirements, rather than the hundreds of other storage demands, seems like an ill-conceived strategy.

What it does raise is the question of exactly what we deem essential to encrypt in the block itself. That’s an important question. I see future blockchain environments as hybrid solutions, with the information written into a block guided by a minimalist design guideline and with URIs written into the block to point to secondary locations where information-dense artifacts related to the block are stored. That certainly opens up other places where attacks on the integrity of the system might be targeted, but that’s not a new problem.

There are examples from Monegraph and Everledger where data relevant to a block record is stored in places other than the blockchain itself. This is likely a smart implementation move, even though it adds complexity and opportunities for new points of attack. The goal should be to put in the immutable block only the information sufficient to make a unique record that permanently captures the event you’re trying to recognize. Blockchains are not simply a new form of database intended to replace the RDBMS.
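That hybrid, minimalist design can be sketched with nothing but a hash chain: each block commits to an off-chain artifact by storing only its digest and a URI. The structure and field names below are illustrative, not any real ledger format:

```python
# Sketch of a minimalist block: only a pointer (URI) and a content
# digest go on-chain; the rich artifact itself is stored elsewhere.

import hashlib
import json

def make_block(prev_hash: str, artifact_uri: str, artifact: bytes) -> dict:
    block = {
        "prev": prev_hash,                               # links the chain
        "uri": artifact_uri,                             # off-chain location
        "digest": hashlib.sha256(artifact).hexdigest(),  # commits to content
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block("0" * 64, "https://example.org/evidence/1", b"portfolio artifact")
nxt = make_block(genesis["hash"], "https://example.org/evidence/2", b"another artifact")

# Anyone holding the artifact can recompute its digest and compare it
# to the on-chain value; tampering with the artifact breaks the match.
print(nxt["prev"] == genesis["hash"])
```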

3. Distributed Consensus

This is perhaps the most serious criticism of the use of the blockchain in the context of credentialing or the recognition of competencies. There is no need for the kind of Proof of Work (PoW) that is an integral part of the Bitcoin cryptocurrency environment. Nor is it acceptable to predicate consensus on committing a record to the ledger on enormous expenditures of energy and raw computational power. Whether we limit the number of blockchains and devalue the ‘currency’ in an agreed fashion to bound the investment required to earn the right to create a block, or, more likely, look to some of the emerging algorithms based on Proof of Stake (PoS), the current Bitcoin PoW consensus algorithm needs a suitable replacement.
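The PoS idea mentioned above is easy to caricature in a few lines: the right to create the next block is assigned with probability proportional to stake, rather than by winning an energy-hungry hashing race. The stakes and names below are made up:

```python
# Toy Proof-of-Stake selection: block-creation rights are drawn in
# proportion to stake. Real PoS protocols add slashing, randomness
# beacons, and much more; this shows only the weighted-selection core.

import random

stakes = {"alice": 50, "bob": 30, "carol": 20}  # invented stake holdings

def pick_validator(stakes: dict, rng: random.Random) -> str:
    names = list(stakes)
    return rng.choices(names, weights=[stakes[n] for n in names], k=1)[0]

rng = random.Random(42)  # seeded so the run is reproducible
picks = [pick_validator(stakes, rng) for _ in range(1000)]

# alice holds half the total stake, so she should win roughly half the draws
print(picks.count("alice") / len(picks))
```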


Image credit: Earl McGehee, CC BY NC ND

In the UT Austin pilot, with development starting this summer, we’re beginning our explorations using the Ethereum environment, but looking at altering the block-creation process to limit a block to a ledger record. There is more to our pilot: it involves badges and writing badge metadata into the block ledger, a database for structured rubrics, and a database for rich media (effectively a bag-of-bits S3 store) to capture different artifacts related to evidence of learning. More on that in another post.


Posted in academic_transformation, badges, blockchains, CBE, higher_education, innovation, UT_Austin

Inaugural Leadership Roundtable on Academic Transformation, Digital Learning, and Design: Towards The Creation of a Discipline?

I was privileged to attend a gathering recently at Georgetown University to talk about the creation of a new academic discipline around learning design.  What follows are some reflections from that stimulating meeting.

Image credit: Tony Brooks, Georgetown_NonHDR, CC BY 2.0


There is a flurry of work going on in rethinking the space of learning technology and its role in designing learning experiences, conducting learning sciences research, and continuing or expanding the delivery of core services (e.g., video production, animation, and increasingly VR/AR experiences in 3D immersive visual spaces).

Examples include:


But the general result has been the same in both K-12 classes and in higher ed – not much has changed. Instead of being valued as a tool for learning, the technology has had to make the case that it eases the instructor's job. Otherwise adoption is thwarted.[1]

There are tensions on a number of fronts. Efforts at the University of Michigan led by James DeVaney, Assoc. Vice Provost for Digital Education, intend to gracefully 'go out of business'. They want the integration of digital tools to become so pervasive that it no longer needs to be called out. And the location to which that work devolves is the academic departments.

The vision put forward at the Leadership Roundtable on Academic Transformation, Digital Learning, and Design attempts to address academic department 'ownership' of ed tech research and applied innovation. In this case it involves establishing a new academic discipline altogether, rather than embedding it as a program in an existing discipline like Ed Psych or Instructional Technology; these and related disciplines tend to find homes in Schools or Colleges of Education – something that GU doesn't have. That may well be a unique opportunity.

But there are cautions nonetheless. The proposal on the table, written by Prof. Eddie Maloney, Executive Director of the wonderful Center for New Designs in Learning and Scholarship (CNDLS, founded by Prof. Randy Bass many years ago), emphasized correctly that real innovation happens at the boundaries of disciplines, research methods, and theoretical models. Instantiating a new graduate program in a department of Learning Design adopts the model of the academy that has served for hundreds of years.

Years ago Seymour Papert wrote Why School Reform Is Impossible[2], in which he describes a realization he had come to: that "'reform' and 'change' are not synonymous". Granted, Papert's focus is again on K-12, but I don't think it wise to dismiss this too hastily. He writes about "'assimilation blindness' insofar as it refers to a mechanism of mental closure to foreign ideas" and refutes Roy Pea's conclusion that LOGO failed to live up to Papert's predictions. Papert notes that the 'grammar of school' is a deep belief structure that is exceptionally difficult to dislodge and disrupt. It's rather like the underlying philosophy of teaching and learning that all faculty have, whether self-aware of it or not.

Papert wrote,

"Complex systems are not made. They evolve…. education activists can be effective in fostering radical change by rejecting the concept of a planned reform and concentrating on creating the obvious conditions for Darwinian evolution: Allow rich diversity to play itself out".

What has me thinking is how we enable the continuation of the creative, messy, but productive interplays at the edges of different systems. Are we sacrificing what makes the potential here so large by becoming another department in the contemporary academic higher ed institution? Does playing inside the square (a play on the Aussie phrasing) diminish our potential to change the organization, especially when we really don't know the details of the outcomes we seek? What purpose is this proposal serving? Is it a search for internal legitimacy? What agenda(s) will it enable? What risks accompany the approach, and what opportunities exist to mitigate those risks?

As a post-script, I'm pleased to say that the presentation Eddie made to the GU curriculum committee for a new Masters in Learning Design was approved. We will see in the coming months and years how this grafting of service and applied research onto the new form of a hybrid academic department matures and impacts its surroundings.

One thing is sure. The community that has begun to form around it is rich, rewarding, and intellectually stimulating. It's a plus when that's complemented by deeply generous and open people.

[1] Why Ed Tech Is Not Transforming How Teachers Teach – Education Week, June 11, 2015.

[2] Papert, Seymour (1999), "Why School Reform Is Impossible", The Journal of the Learning Sciences, 6(4), pp. 417-427, last accessed 5-5-2016.


Posted in academic_transformation, higher_education, innovation

Innovation – the "Special Forces" model and Pasteur's Quadrant: impact for TEL in Higher Ed

Innovation is described by some as 'connecting the dots'. The iconoclastic chief of the Virgin Group, Sir Richard Branson, uses the mantra "A-B-C-D (Always Be Connecting the Dots)." The magic in this recipe is seeing the dots in the first place, since most people view a subset of what's really out there and work within that framework. It is a criticism of much of higher education that students are taught to 'collect the dots' rather than connect them (Seth Godin, Stop Stealing Dreams).

Companies have made numerous attempts at building innovation groups within them, usually without success. Some were iconic failures. Others were extraordinarily influential, just not for the company that paid them. Xerox PARC comes to mind. But the most enduring innovation organization comes from the least likely of places – the US Government, and specifically the Department of Defense in the Defense Advanced Research Projects Agency – DARPA. It was founded in the shadow of the Russian launch of Sputnik, with a simple mission: "to prevent and create strategic surprise."
The special forces reference pertains to giving the local R&D groups the independence to make decisions 'on the ground' as the work they are doing dictates. This includes budget redirection, hiring, and shelving unproductive research directions to revise them toward new ones. In short, it's about giving the people leading these groups the permission to creatively respond to emerging conditions in real time. Summarized in list form, the three principal characteristics of DARPA-like organizations on which their success rests are:

– Ambitious Goals
– Temporary Project Teams
– Independence

These are critical observations in the HBR article by Regina E. Dugan and Kaigham J. Gabriel, former leaders at DARPA. They have taken these lessons learned and translated them to the Motorola Mobility group's Advanced Technology and Projects (ATAP) unit, which Google picked up in 2012. They address what they believe are the ingredients of the 'secret sauce' DARPA has discovered and why most industries and businesses have failed to replicate it.

We believe that the past efforts failed because the critical and mutually reinforcing elements of the DARPA model were not understood, and as a result, only some of them were adopted. Our purpose is to demonstrate that DARPA’s approach to breakthrough innovation is a viable and compelling alternative to the traditional models common in large, captive research organizations.

Relevance to Higher Ed

It might be logical to translate this to the university context and to the organizations or units within it that try to address emerging technologies and their application to issues of teaching, learning and entrepreneurship. There is some utility in this, but it's unfortunately not a simple parallel with mappings of DARPA processes to university practices. Were it only that easy.

Dugan and Gabriel remind us of Pasteur's Quadrant, developed by Donald Stokes back in the late 90's, where he argued convincingly that by recognizing the importance of use-inspired basic research, a new relationship can be established between science and government (see Pasteur's Quadrant: Basic Science and Technological Innovation). The crux of the argument is illustrated by the Cartesian graphic below.

The upper right quadrant is the sweet spot in this model, named after Louis Pasteur for his work advancing microbiology while coming up with practical advancements such as discovering the principles of inoculation, pasteurization of milk (from whence the term comes), and microbial fermentation. DARPA 'lives' in Pasteur's Quadrant.

In the university context, there are a few laboratories and centers that thrive in this space. Some bridge the boundary between Pasteur and Edison, such as MIT's Senseable City Lab, led by Carlo Ratti. The majority, however, inhabit the upper left quadrant, Bohr's Quadrant, characterized by pure or basic research. That has been one of the problems that many accountability-minded legislatures find difficult, articulated most succinctly by newly elected Governor Ronald Reagan when he wrote in 1967 that

taxpayers shouldn’t be “subsidizing intellectual curiosity” at universities.

to which the LA Times replied

If a university is not a place where intellectual curiosity is to be encouraged, and subsidized, then it is nothing.

The challenge is that most higher ed institutions are confronting rapid changes in areas such as big data, analytics, and computational algorithms, but these rarely find their way back into the teaching and learning practices of the academy.

Informing the T&L space based on advances in data & learning analytics, visualization, and cognitive sciences is tricky. On the one hand, most organizations that own this responsibility are in Edison's quadrant. They are focused on applications of practical value, often articulated by the caution not to 'experiment on' the young charges in faculty classrooms. Their goal is to move best practices of established value more widely into the iconic space where learning purportedly takes place – the classroom. Never mind that there is substantial data to suggest that the classroom is among the last places that substantive learning happens.

A Digression

I'm reminded of a recent visit to a well-known research university where, in the company of a computer science colleague, I visited a variety of groups as part of an information-gathering trip about inter-disciplinary innovation. We spent part of the day in that university's teaching and learning center and then moved on to another group. On arriving at the next stop of the itinerary I explained to our new host that we had just visited the teaching & learning center. The professor looked me in the eyes and paused for a long minute before saying

some people would say on this side of the campus that if you’re teaching, students aren’t learning…

Boundary Crossing

Getting innovation to 'happen' requires stepping outside the square of one's own design or thinking paradigm. One way to do that is to find people who can engage with you from either the edges of your current domain focus or outside it altogether. For example, Philips had established a significant market share in PET, CAT, and related medical visualization technologies. But incremental improvements weren't expanding their market: competing companies had comparable technology improvements, and incremental advances didn't translate into major expansion of the market or large increases in profits. Just the opposite. Substituting newer technology, developed at significant cost, for older technology couldn't significantly advance either the competitive value of their scanners or qualitatively improve the patient experience. Instead they sought technologies that might enable the creation of products and services that people would find more meaningful than current offerings. They asked, "Will a new approach, not just to the technology but to the entire problem space in which the technology is embedded, transcend existing needs and give customers a completely new reason to buy a product?"

This is closer to the situation of higher education because most technology enabled learning support organizations are not doing novel, discovery oriented research. That’s the province of the faculty or departmental research labs and institutes. Partnerships with these discovery-focused research facilities can be exhilarating and valuable. But they are not the primary work or intellectual space for the application of learning sciences and new practices to the learning mission of the academy.

The trap for applied research learning science groups is incremental innovation. Such work stays well within the existing frame of reference, representing slight improvements that return immediate payoffs, even if relatively small ones. The key idea is that the framing of the problem has not been altered, only the exact steps toward getting to a better solution.

In the case of Philips the researchers took a step backward, or sideways. The improvements in computed tomography (CT) scanning were steadily advancing. In fact, the number of images that a CT scanner could capture with each rotation of the X-ray tube had increased sixteenfold from its introduction in the early 1970's through the early 1980's, and the rotation speed had doubled (improving the machine's ability to compensate for patients' movements). It would continue to improve, but so would the scanners of their primary competitors. What else could improve scan results or speed the process overall?

For many, getting a CT scan is a profoundly anxiety-provoking experience. You aren't getting one because things are going well. Further, the process is foreign to any normal person's experience, full of strange machines, injections, and loud noises. The result of all this? Patients don't lie still on the scanning bed. No matter how accurate the scanning technology gets, fidgety patients lead to lousy images, more time to capture decent ones, and an overall unpleasant experience. For some patients, especially children, sedation is required – more time, additional expense, and further negative response to the whole diagnostic experience.

Expand the frame. What is the totality of the experience, and how can addressing these other elements enhance the core effectiveness of the incrementally improving technology? The answer was to make the experience leading up to and within the CT scanner more engaging and to counter the foreign, fear-inducing strangeness it tended to produce. By using LED displays, video animation, RFID (radio-frequency identification) sensors, and sound-control systems, the patient's experience became the focus.

For example, when a child approaches the examination area, she chooses a theme, such as "aquatic" or "nature." She is then given a puppet containing an RFID sensor, which automatically launches theme-related animation, lighting, and audio when she enters the examination room. The theme can also be used to teach the child to stay still during the exam: In the preparation room, a nurse may show a video of a character on the sea and ask the child to hold her breath when the character dives underwater to seize a treasure. Projecting the same sequence during the exam helps the child hold her breath and lie still at the right moment. (Roberto Verganti, Designing Breakthrough Products, HBR)

The result was that patients stayed quieter, fidgeted less, and picture quality improved dramatically. Fewer patients required sedation, making the process shorter on average. Change the frame, solve a different but related problem, and improve a process that includes the one that was the initial focus of attention. But to do this required boundary crossers.

These are people who have much of the core knowledge base of the primary researchers but who approach the problem from a different perspective. In the case of Philips and the CT example, they certainly had their share of doctors, hospital managers, engineers of medical equipment, and marketing experts. What they brought to the table to augment this were architects, psychologists, contemporary interior designers, LED technologists and media specialists, interaction designers, and game-oriented interactive hardware and software designers.

To work creatively in Edison's quadrant within higher education requires reframing the problem space. Yes, the applied researcher typically working in Edison's Quadrant is looking for practical solutions and often isn't necessarily looking to understand why something works in detail, just that it does. In higher ed, because of the nature of our work and our cultural context, we generally do care about why something works – to a point. It has to have credibility, because our customers are researchers in their own right, experts in their domains. This is not limited to the STEM disciplines. Faculty in the humanities and performing arts are knowledge creators, using different methodologies, with different criteria by which understanding and meaning are created and assessed. But they are usually looking for logical as well as intuitive consistency, and that cannot be dismissed with "it just works, but I don't know why."

There are elements of the "special forces" approach that are critical to groups trying to apply what we know, and what we're learning, about the cognitive sciences and learning to enhance the undergraduate experience. Indeed, a portfolio of work that includes ambitious goals, temporary project teams or "hot teams", and independence contains the necessary ingredients. But so too does the focus on applied innovation: problem solving in the practical world of undergraduate education, and sitting on the boundary of Pasteur's and Edison's Quadrants where the work is creative, socially meaningful, and pragmatic.


Verne Burkhardt (2009). "Design Thinking for Innovation: Interview with Tom Kelley, General Manager of IDEO, and Author of The Art of Innovation and The Ten Faces of Innovation". Design Thinking Blog.


Regina E. Dugan and Kaigham J. Gabriel (2013). ""Special Forces" Innovation: How DARPA Attacks Problems", HBR.

Donald E. Stokes, Pasteur’s Quadrant – Basic Science and Technological Innovation, Brookings Institution Press, 1997

Roberto Verganti (2011). "Designing Breakthrough Products", HBR.

Posted in Uncategorized

There’s lies, damn lies and statistics

Image: graph of a spurious correlation – clear correlations "proven" statistically.

There’s a reason I stick with my Twitter feed. It brings up articles that I otherwise would have missed. Take these two.


It's an interesting re-analysis of a climate change article by Lewandowsky, Gignac, & Oberauer (2013), presented by Dixon and Jones (2015), who concluded: "respondents convinced of anthropogenic climate change and respondents skeptical about such change were less likely to accept conspiracy theories than were those who were less decided about climate change." This was in response to Lewandowsky et al., who had shown a robust linear relationship between climate change rejection and conspiracy theory ideation.

The rebuttal by Lewandowsky, Gignac & Oberauer (2015) seems pretty compelling. The essence of the argument is methodological: when do you choose the statistical model to use and believe? The main point that Lewandowsky et al. make is:

“Dixon and Jones’s core argument is that the relationship between the two variables of interest, conspiracist ideation (CY) and acceptance of climate change (CLIM), is nonlinear, and that the models reported for both surveys were misspecified. To reach their conclusion, Dixon and Jones first make three questionable data-analytic choices to cast doubt on and attenuate the linear effects reported, before they purport that there is nonlinear relationship after reversing the role of the variables of interest in the statistical model for the panel survey. No statistical or theoretical justification for that reversal is provided, and none exists.”

So you can choose a different model, but if you do, you had better have a compelling reason for doing so. Dixon and Jones didn't.

Hence, Lewandowsky et al. conclude:

“In summary, Dixon and Jones’s analysis has no bearing on the results we reported for either survey because it reaches its main conclusion only by reversing the role of criterion and predictor without any theoretical justification. The only statistical justification offered for that reversal (“with nonlinear models, it is important to explore relationships in both directions”) demonstrably does not apply. Without that reversal, Dixon and Jones’s criticism involving nonlinear relationships is moot because none are present.”

The main point was elegantly stated a bit earlier in the article.

“Any correlation matrix can be fit equally well by more than one model. This issue of equivalent models has been discussed repeatedly (e.g., Raykov & Marcoulides, 2001; Tomarken & Waller, 2005). The consensus solution is to limit the models under consideration to those that have a meaningful theoretical interpretation (MacCallum, Wegener, Uchino, & Fabrigar, 1993). Alternative models should reflect alternative theoretically motivated hypotheses, any mention of which is conspicuously lacking in Dixon and Jones’s Commentary.”

There are lies, damn lies and statistics. If you are going to use statistics wisely, you had better have a good theoretical model on which to base your proposed analysis. In the absence of that, any conclusion drawn is suspect.
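The reversal point is easy to demonstrate with simulated data (a sketch of the general statistical issue, not a re-analysis of either paper's data): the correlation between two variables is symmetric, but regressing Y on X and regressing X on Y are different models with different slopes, so swapping predictor and criterion is a substantive modeling choice, not a free one.

```python
import random

random.seed(1)
# Simulate two correlated variables with unequal variances
x = [random.gauss(0, 1) for _ in range(500)]
y = [0.6 * xi + random.gauss(0, 1.0) for xi in x]

def slope(pred, crit):
    """OLS slope of crit regressed on pred."""
    mp = sum(pred) / len(pred)
    mc = sum(crit) / len(crit)
    cov = sum((p - mp) * (c - mc) for p, c in zip(pred, crit))
    var = sum((p - mp) ** 2 for p in pred)
    return cov / var

b_yx = slope(x, y)   # y regressed on x
b_xy = slope(y, x)   # x regressed on y

# The correlation is symmetric, but the two regressions differ:
# b_yx * b_xy equals r^2 (< 1), so the slopes agree only when the
# variables are perfectly correlated.
```

Here `b_yx` is near the true 0.6 while `b_xy` is noticeably smaller, which is exactly why reversing the roles of criterion and predictor requires a theoretical justification.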

J.B.S. Haldane, “[T]he Universe is not only queerer than we suppose, but queerer than we can suppose.”

Sorry that the second (response) article of these two is proprietary rather than open access. If you have access to a library that carries Psychological Science, here are the relevant citations.

The open one, with a CC BY NC license, is:

Ruth M. Dixon and Jonathan A. Jones. Conspiracist Ideation as a Predictor of Climate-Science Rejection: An Alternative Analysis
Psychological Science May 2015 26: 664-666, first published on March 26, 2015 doi:10.1177/0956797614566469

and this one, the rejoinder, is behind a paywall. 😦

Stephan Lewandowsky, Gilles E. Gignac, and Klaus Oberauer. The Robust Relationship Between Conspiracism and Denial of (Climate) Science. Psychological Science May 2015 26: 667-670, first published on March 26, 2015 doi:10.1177/0956797614568432

Posted in Uncategorized