John Perry Barlow’s List of Adult Principles

In honor of John Perry Barlow’s passing I’d like to repost these adult principles that he wrote for himself when he turned 30 years old. He was surprised to reach 30, which spurred him to reflect on what it meant to indisputably become an ‘adult’. His response is as simple, elegant, and resonant as his song lyrics and his belief in the power of community.

— pdl —


1 Be patient. No matter what.

2 Don’t badmouth: Assign responsibility, not blame. Say nothing of another you wouldn’t say to him.

3 Never assume the motives of others are, to them, less noble than yours are to you.

4 Expand your sense of the possible.

5 Don’t trouble yourself with matters you truly cannot change.

6 Don’t ask more of others than you can deliver yourself.

7 Tolerate ambiguity.

8 Laugh at yourself frequently.

9 Concern yourself with what is right rather than who is right.

10 Try not to forget that, no matter how certain, you might be wrong.

11 Give up blood sports.

12 Remember that your life belongs to others as well. Don’t risk it frivolously.

13 Never lie to anyone for any reason. (Lies of omission are sometimes exempt.)

14 Learn the needs of those around you and respect them.

15 Avoid the pursuit of happiness. Seek to define your mission and pursue that.

16 Reduce your use of the first personal pronoun.

17 Praise at least as often as you disparage.

18 Admit your errors freely and quickly.

19 Become less suspicious of joy.

20 Understand humility.

21 Remember that love forgives everything.

22 Foster dignity.

23 Live memorably.

24 Love yourself.

25 Endure.

Thank You JPB.


The gig economy and STEM work

The impact of the gig economy is pervasive, extending even to technical and scientific work. A recent news article in Nature highlighted some of the challenges recent PhD graduates face in seeking jobs and their role in the emerging independent freelance workforce. The growth of this kind of work is in part a consequence of a continuing dilemma: many pursue a research career expecting to enter the academy as academic researchers, only to find the academy’s doors narrow if not largely closed.

Here’s a summary of that Nature news article.

Article Citation: “Flexible working: Science in the gig economy”
Roberta Kwok, Nature 550, 419–421 (19 October 2017) doi:10.1038/nj7676-419a
Published online 18 October 2017

The article highlighted a number of people in different roles within the scientific gig economy. The two below are relatively recent PhD graduates.

Caline Koh-Tan: The freelance science consultant

Freelance scientist, Singapore.

PhD in cardiovascular genetics at the University of Glasgow, UK. First postdoc in cardiovascular research; second postdoc in veterinary sciences.


  • academic-proofreading projects
  • advice on methods, tests and anything involved with running a lab
  • contracted to write guide to understanding chemotherapy for patients
  • freelance genomics consultant to startups

“I also put data mining and biostatistical-analysis skills on my profile. But about half of the job invitations so far have been from students looking for someone to do their class assignments. I will not accept those jobs.”

Rate: US$30 an hour, but flexible

Cecile Menard: The part-time freelancer

Independent land-surface modeller; research associate, University of Edinburgh, UK; and member of a small virtual research organization for freelance scientists.

You can register as a freelancer for tax purposes online.

Observation: Stressful for those needing/wanting steady income

Issue: having access to research instrumentation and computational resources, e.g., supercomputers. This can be managed by building and leveraging professional networks.

She works three days a week on a project to reduce uncertainties in snow models and complements that with two days a week of freelance work.

The part-time job is now considered a “safety net”.

Takeaways:

  1. Research careers have been affected by ‘projectification’.
  2. Online-economy practices are spreading into conventional employment.
  3. Routine research tasks are being outsourced, e.g., categorization; database cleanup; data set construction and validation; data wrangling/scripting.
  4. In parts of the UK, Europe and Scandinavia, 1 person in 40 gets more than half of their income from online crowd-work platforms.
  5. In the US, 16% of the workforce is engaged in the gig economy (gig work plus independent contractors, freelancers, on-call workers, temporary-help agency workers, or people who are contracted out in their main job), up from 10% ten years ago (economists Lawrence Katz, Harvard, and Alan Krueger, Princeton).

Bottom line: We need new institutions to provide workers’ compensation and unemployment insurance, with independent workers paying into a fund that they could draw from in down times. This is a call for entrepreneurs to invent the social-benefit structures needed for the 21st-century economy.


Saturday Morning Reading


Image credit: Susan Murtaugh, Phil reading on iPad, CC BY ND

Saturday mornings are a time when I sit down with a cup of coffee and do some ‘lateral reading’. What does that mean? I have some initial ideas of where I want to start reading, but I then follow leads, links on Twitter, etc., wherever they take me. I periodically ‘reset’ back to the topic list in my ‘todo’ list, but it’s a dialectic between curiosity and the projects on my mind.


I thought I’d trace the pattern of this, as I’m curious about what others do when they sit down to recharge, explore, and move some of their project work forward.

Saturday morning lateral reading usually lasts until the early afternoon. It begins after a light breakfast (usually a bagel and fruit) while listening to NPR’s Weekend Edition followed by Car Talk Classic. My usual routine involves reading the news on my iPad (NYT, The Guardian, WP), checking what’s new on Twitter and FB, reading selected journal TOCs and news bits, and, depending on the research articles, an article or two (Science, Nature, CACM, Psychological Science in the Public Interest, etc.). Then the todo list emerges to vie for attention.

Yesterday went something like this. I read the news sources that are my routine go-to sources for the events of the day. While eating breakfast and going through the stories, I listened to a caller on Car Talk asking why, when she was driving on a rural road in a thunder and lightning storm, a lightning strike hit the road in front of her rather than her car. She thought her car, being a metal box, would have been the more likely target. This led initially to a discussion about lightning strikes in general and the direction they travel (from clouds to earth or earth to clouds).

Ray said he thought it was actually earth-to-cloud in direction, which Tom, with his infectious laugh, thought was ‘bogus’. This banter prompted me to look up information on the formation of lightning and its mechanism of discharge via a Chrome search that led to the Earth Science Stack Exchange site. There, a really well-written post by Vikram (4-14-14) described the process: from the cloud end, the buildup of negative charges (electrons), coupled with a comparable increase in positive charges on the ground, reaches the point where the cloud releases a burst of negative particles that move earthward in a stepwise pattern. They advance 50-100 m, pause about 50 μs, and branch again, searching for the path of least resistance toward the ground. When they get close enough to earth, the positive side arcs toward the nearest of these branching negative stepped leaders. When they connect, they have completed a path of least resistance, and the ensuing flow transforms the channel into a plasma, generating enormous heat (50,000 kelvin) and enabling the positive and negative particles to flow toward their opposite polarity. The short answer is that the flow happens in both directions, but the flash we see as lightning actually goes from the ground to the clouds.

At that point I settled into my reading chair (a Stressless chair with ottoman and a swing-arm computer tray by a window) and picked up last month’s Communications of the ACM (CACM), which was bright pink with the screaming headline “SEX is an algorithm” on the cover. Scanning the ToC, I checked out what was inside and where I wanted to focus my time. I was attracted to a couple of Viewpoint articles. I started with “Technology and Academic Lives” by Jonathan Grudin, a Principal Researcher at Microsoft Research in Redmond.

The article was about the rise of ‘busyness’ in higher ed, that is, the sense that most of us have, along with, I suspect, everyone else in the western world, that our days are increasingly packed with more and more stuff to do in less and less time. All of this contributes to the loss of concentrated thinking time spent discussing things with local colleagues. Local is the key descriptive term here. Grudin divides the past four decades into four time periods.

  • 1975 – Pre-internet era
  • 1979 – Pre-web era
  • 1995 – Early web era
  • 2015 – Information Age

It was a nice read, following the theme that computer-mediated communication has increased in inverse proportion to face-to-face social interaction. The pre-internet era by necessity meant that your primary intellectual stimulation and challenges emerged from discussions with your departmental colleagues and graduate students. With the internet (pre-web), the opportunity to connect with researchers in your specialty or sub-specialty around the world led to substantive discussion of your work within this distributed community, and with that came a cost in your social connectedness to your local community and, to some extent, even your grad students.

The 1995 period was marked by recommendations for new hires based primarily on external letters from people to whom most in the department had loose ties or none. This was coupled with the continuing diminishment of one’s local community in relation to one’s research.

By 2015, data had proliferated to the point that an obsession with quantification emerged. Polarization increases between those who are and aren’t quantitatively focused. This is coupled with a sharp rise in assessing the impact of one’s work. The focus on good teaching has been replaced by a rise in the importance of avoiding bad teaching. Raising money has grown in significance, making grant-getting a priority and diminishing the stature and voice of those not as successful in, or interested in, that side of the profession.

In summary, the current status quo is marked by:

  • increasing importance of fundraising;
  • increasing significance of rankings;
  • specialization narrowing interests;
  • collaboration across distance accelerating scholarship and discovery;
  • distributed research teams coming at the cost of local community, with an increase in weak ties.

That rings true to my experience. Close-knit local research communities are a thing of the past.

Grudin ends by suggesting we think about new forms of interaction and assessment that are less impersonal and stressful. He uses the analogy of the martial art of Aikido, where the forces directed at you are redirected to achieve positive outcomes and retain balance. Malcolm will like this reference.

Next up was an article by Pat Helland entitled “The Power of Babble” about the proliferation of metadata and standards. There was a nice quote from Dave Clark (MIT) to the effect that successful standards happen only when they are lucky enough to slide into a trough of inactivity after a burst of research and before a huge investment in productization.

Systemic changes in large computing systems require translation between two data representations, and that’s likely to be “lossy”. Often one builds a canonical representation that the old system’s data must be converted to, and from that converted again into the new data structure. That’s doubly “lossy”.
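The double loss can be sketched concretely. This is a made-up illustration (the schemas and records below are hypothetical, not from the article), showing how each conversion step keeps only the fields its target representation understands:

```python
# Sketch of why a canonical intermediate representation is "doubly lossy":
# each conversion keeps only the fields its target schema knows about.

OLD_FIELDS = {"name", "dept", "office_phone"}
CANONICAL_FIELDS = {"name", "dept"}   # canonical model is narrower than the old system
NEW_FIELDS = {"name"}                 # new system is narrower still

def convert(record: dict, target_fields: set) -> dict:
    # Keep only the fields the target representation understands.
    return {k: v for k, v in record.items() if k in target_fields}

old = {"name": "Ada", "dept": "CS", "office_phone": "x1234"}
canonical = convert(old, CANONICAL_FIELDS)  # first loss: drops office_phone
new = convert(canonical, NEW_FIELDS)        # second loss: drops dept

assert new == {"name": "Ada"}  # two conversions, two rounds of information lost
```

Each hop silently discards whatever the intermediate schema can’t express, which is why converting old → canonical → new loses more than a single direct translation would.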

The article fell down for me at that point, as Helland calls for simply becoming more relaxed about what you don’t understand, accepting befuddlement with pleasure.

Next up were a couple of Research for Practice articles on distributed consensus systems and the Paxos, Chubby lock service, and Raft algorithms. The latter was referred to as “Paxos for humans” :-)

A contributed article in CACM on Spark, “A Unified Engine for Big Data Processing”, looked interesting. That was a harder read.


Image credit: William Starkey on Geograph, CC BY NC SA

Finally, from a tweet that came in toward the end of this, a video interview by Steve Wheeler (from Plymouth University) with Yves Punie, who keynoted the EDEN conference in Europe and spoke about the digital competencies needed by learners and citizens in society today.

Punie described 21 digital competencies grouped into five clusters:

  1. Understanding digital information, its authority and its critical evaluation
  2. Communicating in a digital world, learning how to collaborate and share
  3. Becoming facile with digital content creation, both as individuals and groups
  4. Understanding issues of safety, privacy, health and well-being in cyberspace
  5. Digital problem solving including reflecting on what problems need to be solved

Punie noted that a recent survey of employers in the EU reported that 37% of workers don’t have sufficient digital skills to do their jobs. This, he indicated, is a failure both of companies not providing professional development and training and of educational institutions not graduating digitally prepared workers or socially constructive digital citizens.

This was followed by reading a paper, “Embracing Confusion: What Leaders Do When They Don’t Know What to Do” (Phi Delta Kappan), and some email.

That is probably representative of my morning-to-early-afternoon work. Sunday was more of the same, with more attention to my todo list.

What do you do for lateral reading?



Education 2020


image credit: Dragan CC BY

Post Factual Times of Magical Thinking –Dec. 14, 2016

[This is the tag line that will begin writings that fall into the strange world after Nov. 7th, 2016.]


How do you address the possible futures of education in 2020? We have a hard time figuring out what’s going to happen next quarter, let alone next year, or four years into the future. This was the task I was given by Campus Consortium for a webinar that took place today.

My preparation involved some days of personal reflection, email back and forth with colleagues, and reading. I had 20 minutes, after which there was some Q&A. Webinars are an odd format; depending on the platform, your ability to get any sense of the audience is highly variable. In this case I’m told there were 112 logins from all over the world. Not bad, though perhaps not up to the lofty standards of my colleague Bryan Alexander’s Future Trends Forum 😉 After the presentation, some questions from the chat room and some audio questions were fielded, and I tried to address them.

Below are the talking points that I used to guide my speaking.


image credit: mayeesherr. future CC BY


  • Post-course Era & Inter-disciplinarity – The problems of today are not solved within disciplinary boundaries. This will lead to an increase in the inter-disciplinarity of the formal learning experience.
    • We are seeing colleges, schools and collections of departments being reorganized around “challenges”. ASU is a prime example. They might simply be re-instantiated as new business units, but which challenges are worth addressing will change with much greater fluidity than our previous disciplinary categories did. In effect, the membership of these units is an aggregate property defined by the collection of resources and people that carry that designation. It’s the inverse of the re-factoring of the learning environment underway today, where courses are an emergent characteristic of the individual students who select a topic of study, not a bucket into which students are poured.
    • The emergence of interest in systems like Salesforce is driven in part by the realization of the learner as the organizing principle of university systems, learning or otherwise. The “course” as the organizational unit of learning is fading. It’s still important to be able to have that lens available, but it will no longer be a fundamental building block of the architecture. LMSs that don’t figure this out soon will be relics.
  • Academic Learning/Post-graduation Earning – Institutions, particularly public institutions, face increasing pressure to demonstrate value and accountability. This is leading to pressure for greater clarity in the connection between the academic learning experience and the capabilities it develops that map onto productive working and earning opportunities post-graduation. This is in the context of the push toward the ‘gig’ economy, which will be met and shaped by concerns for the social well-being too often sacrificed along this trajectory. Where does this lead? It leads to a reversal of what we have called ‘hard’ vs. ‘soft’ skills.
    • It also leads to the recognition that an individual learner must be considered part of the institution’s student body from the time they enroll and continuing for the rest of their lives. Transitioning their role from undergraduate student to ‘alumni’ may make certain marketing sense, but their increasing need to top up their skills and expand their capabilities with recognized certifications or even new degrees means we need to treat them like core members of the learning community who simply have different tags associated with their current lifecycle status. That might be what we think we’re doing today, but the ease with which these individuals can transition between roles and participate in ongoing learning opportunities of varying duration, with and without accreditation, will challenge this notion.
  • Recognition of Learning Achievements (RLA) – learning happens in many places and in many contexts, not just the classroom. We know that, but we have failed to recognize it in sharable, transportable ways. The rise of micro-credentials backed by metadata developed from the badging world provides a pathway towards an “Open Architecture for the Recognition of Learning Achievements”. Behind this is the drive toward extended transcripts and various forms of recognition of achievement collectively referred to as badges, a synonym for the representation of micro-credentials. Like all of these activities, there is a technology component and even larger instructional-delivery and faculty-culture components.

    image credit: Phillip Long, Bologna_University3 CC BY NC SA

    • Core elements of RLA are:
      • the description of the learning outcome,
      • the rubric by which the achievement is judged or assessed, and
      • the evidence that the learner submits by which the rubric is applied.

      Extending the transcript, in effect, transparently gives some insight into the decision rules and evidence by which the summative score or grade was actually determined, in a way that an independent outsider can understand and reasonably judge. Linking to this data is what the extended transcript is all about, and badging systems provide a ready infrastructure to accomplish it, needing only attention to integration.

      • The portability of this record of achievement will be a major issue in the future. Workers stay on average 4.4 years before changing jobs, and will hold something like 15 or more different jobs over the course of their working lifetime. Having to go back to every institution from which they’ve earned a degree, certificate, CEUs, CMEs, etc., is a nightmare and can’t stand.
      • Enter the blockchain….
  • Growth of Learner Agency – Learners need to build their knowledge, literally and figuratively, to be successful across their lifespan. To achieve that, institutions will need to provide more integrated and connected experiences that enable students to ‘do the discipline’ instead of either hearing about what the discipline is or listening to what others have done in it. The results of their achievements need to be associated with the learner, not solely the institution. This aligns with the greater independence of the future work environment and learners’ need to present themselves and their learning achievements to employers and collaborators. It’s absurd that demonstrating one’s achievements today requires contacting every degree, certificate, and learning or professional program to have those entities send ‘authentic records’ of your learning achievements to potential employers. As mentioned above, given today’s average job duration of 4.4 years, this is crazy.

image credit: Phillip Long, CC BY

  • Continued Advance & Ethical Challenges of Big Data and Analytics – there is no doubt that the computational capability to analyze big data is just beginning in higher ed. The data are really not “big” in comparison to astronomy, nuclear physics, or economics, but they are a qualitatively large step up for educational data sets. Serious concerns will need to be met and addressed in terms of privacy, security, and the ethics of the use of this big data. See the IMS Global Learning Data & Analytics Key Principles.
    • These principles include clarity of ownership of learners’ data. The assertion that learners own the data generated in the course of interacting with university systems is a challenge to many institutions: we act like the institution owns it, but we often say the student or learner owns it. Ownership without the ability to do anything with the data, however, is meaningless.
    • Other principles include
      • stewardship,
      • governance,
      • access,
      • interoperability,
      • efficacy,
      • security and privacy, and
      • transparency.
    • Team-based Course Development & the Learning Engineer: The collaborative design and development of technology-mediated learning experiences is becoming an essential element of group course development and design. Whether in the digital surround of the residential learning environment or in a more fully online distributed learning environment, the demands of the design process are creating the need for the role of the “Learning Engineer”.
      • This change in design practice is predicated on the recognition that the role of a faculty member can only be stretched so far. It’s less and less realistic to believe an instructor can be the domain expert in their discipline, a productive researcher in that domain, an instructional designer, a UI expert, a learning scientist, and a dynamic presenter. People may have many of these attributes, but having them all is unreasonable to assume and difficult to find in practice.
      • What does that mean? It means functions need to be segregated into roles that support the faculty. One of those roles is the learning engineer: someone with multidisciplinary skills in the learning sciences, cognitive psychology, and learning design, along with the computational skills to bring these to a digital learning environment.

image credit: Andrés García, Isolation, CC BY NC

  • Personalized Learning & Social Context – A trend is emerging to meet learners where they are, not at the mythical median represented by the average student. The ability to gather data and analyze it, increasingly in real time, to provide relevant, timely, and predictively guided personalized learning pathways is both a holy grail and a chimera. It is appealing to provide desirable difficulties framed by the strengths and deficiencies of the learner’s current state of mind, but we have evolved over tens of thousands of years to be exquisitely social creatures. We have to retain and emphasize the social dimension of learning even in distributed, online, so-called personalized learning environments.
    • The challenge here is personalization without isolation. Technology must let learners see where others are in their learning journeys and facilitate ad hoc group formation, so that peer interaction and study can occur where their personalized journey intersects with others’.
  • Rise of Openness – the expansion of “open” is now moving beyond its roots in open-source software into open access (journals/publications), open science, open data, open educational resources and textbooks/publishing, and, more generally, open scholarship. What is emerging is that transparency is an essential element of advancing knowledge, and the network effects of open sharing accelerate discovery, innovation, and progress. This is not a battle between commercial practices and open sharing; it’s about combining the two into sustainable strategies that harness the power of “open”.
  • Exploitation of open: security and identity in an age of evil actors – this is the converse of, and threat to, the power of open. It is both a technical challenge and, even more, a cultural one. Protecting one’s identity and avoiding data theft has gotten much harder with the sloppy design of rushed-to-market IoT devices. The recent DDoS attack on Dyn exposes the fragility of our online infrastructure. Universities can continue to lock down their services and build virtual moats around their campuses, or they can integrate more sophisticated defenses into the devices that connect them while remaining engaged with the world.

Some examples of technologies in support of these future trends (these are NOT endorsements, but illustrations):
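The three RLA core elements listed earlier (the learning outcome, the rubric, and the evidence) map naturally onto a badge-style metadata record. Here is a minimal sketch, loosely in the spirit of Open Badges-style assertions; every field name, identifier, and URL below is a hypothetical example, not a real standard or service:

```python
import json

# Hypothetical micro-credential record combining the three RLA core elements:
# the outcome claimed, the rubric used to assess it, and a link to evidence.
credential = {
    "recipient": "learner-4821",  # made-up learner identifier
    "outcome": "Design a reproducible data-analysis pipeline",
    "rubric": {
        "criteria": ["correctness", "documentation", "reproducibility"],
        "scale": "1-4, with 3 or above required to earn the credential",
    },
    # Link to the learner's submitted work, so an outsider can apply the rubric.
    "evidence": "https://portfolio.example/learner-4821/pipeline",
    "issuer": "https://university.example",
    "issued_on": "2017-11-30",
}

# Serialize for exchange; an extended transcript could link to records like this.
record = json.dumps(credential, indent=2)
print(record)
```

Because the record carries the rubric and a pointer to the evidence, not just a grade, an independent outsider can see how the summative judgment was reached, which is the point of the extended transcript.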


AI applied to VA de-identified healthcare records

Yesterday, Nov. 29th, Flow Health announced a five-year partnership with the US Department of Veterans Affairs (VA) to build a medical “knowledge graph” using AI to inform medical decision-making and to train AI to personalize care plans.

AI used to find complex patterns in medical symptoms and treatment outcomes. Photo credit: Flickr – A Health Blog, CC BY SA

This is a big data project that will examine millions of VA records, looking for associations between presenting symptoms and interventions with respect to the outcomes that followed.

One has to be extremely careful here because this is simply looking at correlations; causality is another thing altogether. But it might reveal patterns that were just too indistinct without the ‘magnification’ of big data analytics to surface relationships. Follow-up studies will be required to establish whether these patterns are meaningful. Still, it’s an area of promise made possible by today’s computational environment.


Blockchains as a fresh angle toward centralizing power and wealth?

colorful blue blocks

Image credit: Philip Bouchard CC BY NC ND


Recently, a post from Manuel Ortega on the blog Las Indias in English, titled “The blockchain is a threat to the distributed future of the Internet”, attacked what he sees as thinly veiled corporate centralization of the internet through its current darling, the blockchain. Bitcoin, his initial target, is a mechanism by which those interested in centralizing power, control, access, and wealth wield economic might. Big banks and those aligned with them, entities he refers to as “centralizers” (with a link to IBM’s blockchain finance work as an illustration of the definition), are building dependence on heavyweight infrastructure, a synonym for centralized industrial/corporate activity. Dependence on industrial infrastructure thwarts those seeking independence and autonomy on the internet.

The initial evidence of the centralizing tendency of blockchains is Bitcoin’s reliance on mining functions, primarily taking place in China, that create the coins that are the currency of Bitcoin exchange.

This is easy to verify when you look at the way that two Chinese “mines,” Antpool and DiscusFish/F2Pool, hoard more than half of the blocks created by the bitcoin blockchain

That this has emerged in the Bitcoin environment means that this method of financial exchange is controlled by those who have large amounts of capital and can invest in the infrastructure that Bitcoin transactions require. The permissionless distribution of blockchain transaction records to all participants masks the reality that it is just another centrally controlled system, directed by those with the capital to create the currency.

The use case explored to validate these assertions is an application called Twister, a P2P microblogging platform that uses Bitcoin’s blockchain infrastructure. There is an odd lead paragraph introducing Ethereum and, with it, the notion of ‘smart contracts‘, but it’s only tangentially related to the arguments that follow, which instead focus on Twister, which uses native Bitcoin software. Apparently, because they both use some form of blockchain, that’s enough to tie them together and impugn Ethereum based on the Twister critique. That’s like criticizing Oracle based on a critical analysis of MySQL because they both use some variant of relational database. It’s obfuscation that isn’t germane to the argument. That happens quite a bit in the citations offered (e.g., the aside below).

ASIDE – A quirky reference to ‘corporate developments’ leads the paragraph introducing Twister. The aside looks at that, but it’s tangential to Ortega’s primary argument, so you can skip it by not clicking on the link.

It’s important to recognize how Twister is using the blockchain. It’s not what you might initially think. Twister is an alternative to Twitter, and its blockchain is focused on establishing immutable user names. Why? Because of the concern that people can masquerade as someone else. The developer of Twister writes in an FAQ entry:

“Therefore this other peer may try to deceive you by providing forged posts from other genuine users or to refuse to store or forward your own posts.”

The mechanism that addresses the forged-posts concern is having the Twister client check that each post is properly signed by its sender. That is the cryptographic feature of the blockchain he’s leveraging.
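To make the signature check concrete, here is a toy sketch of sign-and-verify using textbook RSA with tiny primes. This is purely illustrative of the idea, not how Twister or Bitcoin actually sign (they use ECDSA with real key sizes); every number here is a classroom example, not real cryptography:

```python
import hashlib

# Toy RSA-style signing (NOT secure: tiny primes, no padding scheme).
p, q = 61, 53
n = p * q                # 3233: public modulus
e = 17                   # public exponent
phi = (p - 1) * (q - 1)  # 3120
d = pow(e, -1, phi)      # 2753: private exponent (modular inverse, Python 3.8+)

def digest(post: bytes) -> int:
    # Reduce the post to a small integer fingerprint below n.
    return int.from_bytes(hashlib.sha256(post).digest(), "big") % n

def sign(post: bytes) -> int:
    # The author signs with the PRIVATE key d.
    return pow(digest(post), d, n)

def verify(post: bytes, signature: int) -> bool:
    # Any peer can check with only the PUBLIC key (n, e).
    return pow(signature, e, n) == digest(post)

post = b"hello from @alice"
sig = sign(post)
assert verify(post, sig)                  # genuine post accepted
assert not verify(post, (sig + 1) % n)    # tampered signature rejected
```

The point for Twister is only the shape of the check: a peer relaying posts cannot forge one, because verification needs nothing secret while signing does.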

Ortega’s main concern, however, is that Twister’s use of this method requires the blockchain ledger to be transferred to your machine: since Bitcoin’s blockchain is permissionless, every user gets the full history of transactions, in this case the full list of Twister users, encrypted of course. Ortega worries that this requires a lot of bandwidth (well, not now, as the service is small, but it could if it took off). The assumption is that both the storage required to support this and the bandwidth to participate are “insurmountable barriers”, high enough to disenfranchise the average punter but trivial for corporate giants the likes of Google, Amazon, and IBM.

Ortega’s concerns boil down to three points:

  1. Using blockchains requires bandwidth that is only easily accessible by big corporations, in terms of affordability and physical accessibility to the bandwidth.
  2. The size of the blockchain places too high a storage burden on the user, whilst being trivial to the corporation players.
  3. The distributed consensus algorithm of the Bitcoin blockchain is still subject to the 51% attack problem, if not directly in controlling commits then in hoarding Bitcoins themselves and thus controlling the function of the transaction environment. It’s also energetically and computationally expensive.


I’m less concerned about bandwidth in this instance as this is not specifically a blockchain problem. It’s an overall internet access problem. Of course that doesn’t mean people using the internet and needing bandwidth for whatever they’re doing aren’t affected. It’s also true that bandwidth follows development so poorer areas by and large have less bandwidth.

What it does mean is that this one particular use of bandwidth is not, to me, the entry point for solving a much larger internet-access problem. Other needs – like healthcare – are more likely to motivate change there. This issue sits in the same territory as most inequitable wealth-distribution problems: important, but requiring multi-faceted strategies to address.

Blockchain Size

Local storage capacity is a major concern for the millions of users and potential users around the world who don’t have access to affordable storage. But like bandwidth, blockchains are just one of many hundreds of applications that demand more storage capacity.

The Twister example that Ortega focuses on is a peculiar choice of use case. The blockchain there serves to establish immutable user IDs so that no one can masquerade as someone else when sending microblogging messages. The size of the blockchain in this instance is by design small, so it’s hard to see it imposing a burden on the users of this P2P microblogging platform.
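
To see why such a chain stays small, consider a minimal sketch of a username-registration ledger (field names here are assumptions for illustration, not Twister’s actual wire format). Each block binds one username to a public key and chains to the previous block by hash, making earlier registrations tamper-evident:

```python
import hashlib
import json

def make_block(prev_hash: str, username: str, pubkey: str) -> dict:
    # A block is tiny: a name, a key, and a link to the previous block.
    body = {"prev": prev_hash, "user": username, "pubkey": pubkey}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body

def chain_is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        payload = {k: v for k, v in block.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, "alice", "PUBKEY_A")]
chain.append(make_block(chain[-1]["hash"], "bob", "PUBKEY_B"))
assert chain_is_valid(chain)

chain[0]["user"] = "mallory"  # attempt to steal alice's registration
assert not chain_is_valid(chain)
```

Each record is a few dozen bytes, which is why a user-ID chain grows far more slowly than a transaction ledger like Bitcoin’s.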

Ultimately I don’t think you build toward the future by constraining your creative solution space to current limitations. Don’t get me wrong: we have to systemically address the forces that are retarding, or unwilling to address, access, bandwidth, and affordable storage. But leading that charge with blockchain storage requirements, rather than the hundreds of other storage demands, seems like an ill-conceived strategy.

What this does raise is the question of exactly what we deem essential to encrypt in the block itself. That’s an important question. I see future blockchain environments as hybrid solutions, with the information written into a block guided by a minimalist design guideline: URIs written into the block point to secondary locations where information-dense artifacts related to the block are stored. That certainly opens up other places where attacks on the integrity of the system might be targeted, but that’s not a new problem.

There are examples from Monegraph and Everledger where data relevant to a block record is stored in places other than the blockchain itself. This is likely a smart implementation move, even though it adds complexity and opportunities for new points of attack. The goal should be to put in the immutable block only the information sufficient to make a unique record that permanently captures the event you’re trying to recognize. Blockchains are not simply a new form of database intended to replace the RDBMS.
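
The minimalist-block idea can be sketched as follows (URIs and names are hypothetical). The ledger stores only a URI and a content digest; the bulky artifact lives elsewhere, and integrity is checked by re-fetching and comparing against the digest committed to the immutable record:

```python
import hashlib

def ledger_record(uri: str, artifact: bytes) -> dict:
    # Only the pointer and the fingerprint go into the block.
    return {"uri": uri, "sha256": hashlib.sha256(artifact).hexdigest()}

def artifact_is_intact(record: dict, fetched: bytes) -> bool:
    # Re-fetch from record["uri"], then confirm the bytes still match
    # the digest written into the ledger.
    return hashlib.sha256(fetched).hexdigest() == record["sha256"]

video = b"...many megabytes of evidence-of-learning media..."
rec = ledger_record("https://store.example/artifacts/42", video)
assert artifact_is_intact(rec, video)
assert not artifact_is_intact(rec, b"tampered copy")
```

An attacker can delete or corrupt the off-chain store, but cannot silently substitute a different artifact: the digest in the block won’t match.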

Distributed Consensus

This is perhaps the most serious criticism of the use of the blockchain in the context of credentialing or recognition of competencies. There is no need for the kind of Proof of Work (PoW) that is an integral part of the Bitcoin cryptocurrency environment. Nor is it acceptable to predicate consensus on committing a record to the ledger on enormous expenditures of energy and raw computational power. Whether we limit the number of blockchains and devalue the ‘currency’ in an agreed fashion to bound the investment required to earn the right to create a block, or – more likely – look to some of the emerging algorithms based on Proof of Stake (PoS), the current Bitcoin PoW consensus algorithm needs a suitable replacement.
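
The contrast in cost between the two approaches can be shown in a toy sketch (illustrative only; real PoS schemes add slashing, randomness beacons, and much more):

```python
import hashlib
import random

def proof_of_work(data: str, difficulty: int = 3) -> int:
    # Burn CPU until the hash has `difficulty` leading zero hex digits --
    # the cost the post argues is unacceptable for credentialing.
    nonce = 0
    while not hashlib.sha256(
        f"{data}{nonce}".encode()
    ).hexdigest().startswith("0" * difficulty):
        nonce += 1
    return nonce

def proof_of_stake(stakes: dict, seed: int) -> str:
    # Pick the block producer with probability proportional to stake --
    # no hashing race, so essentially no energy cost.
    rng = random.Random(seed)
    return rng.choices(list(stakes), weights=list(stakes.values()))[0]

nonce = proof_of_work("ledger record 1")
print("PoW nonce found after", nonce + 1, "hashes")
print("PoS producer:", proof_of_stake({"alice": 50, "bob": 30, "carol": 20}, seed=7))
```

Even at this toy difficulty, PoW requires thousands of hash evaluations per block, while the PoS selection is a single weighted draw.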


Image credit: Earl McGehee, CC BY-NC-ND


In the UT Austin pilot, with development starting this summer, we’re beginning our explorations using the Ethereum environment, but looking at altering the block creation process to limit a block to a ledger record. There is more to our pilot: it involves badges and writing badge metadata into the block ledger, a database for structured rubrics, and a database for rich media (effectively a bag-of-bits S3 store) to capture different artifacts related to evidence of learning. More on that in another post.
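
A pilot-style ledger entry might look something like the sketch below. All field names and URIs are assumptions for illustration, not the pilot’s actual schema: Open Badges-like metadata is hashed into the record, with URIs pointing at the rubric database and the media store.

```python
import hashlib
import json

# Hypothetical badge metadata; the recipient is stored as a hash for privacy.
badge_metadata = {
    "badge": "Data Analysis Competency, Level 2",
    "issuer": "example.edu",
    "recipient_hash": hashlib.sha256(b"student@example.edu").hexdigest(),
    "rubric_uri": "https://rubrics.example.edu/da-2",
    "evidence_uri": "https://media.example.edu/artifacts/9f1c",
}

# The ledger record carries the metadata plus a digest of it, so any later
# alteration of the metadata is detectable against the chain.
record = {
    "metadata": badge_metadata,
    "metadata_sha256": hashlib.sha256(
        json.dumps(badge_metadata, sort_keys=True).encode()
    ).hexdigest(),
}
print("record digest:", record["metadata_sha256"][:16], "...")
```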


Posted in academic_transformation, badges, blockchains, CBE, higher_education, innovation, UT_Austin | Tagged , , , , | Leave a comment

Inaugural Leadership Roundtable on Academic Transformation, Digital Learning, and Design: Towards The Creation of a Discipline?

I was privileged to attend a gathering recently at Georgetown University to talk about the creation of a new academic discipline around learning design.  What follows are some reflections from that stimulating meeting.

Image credit: Tony Brooks, Georgetown_NonHDR, CC BY 2.0


There is a flurry of work going on rethinking the space of learning technology and its role in designing learning experiences, conducting learning sciences research, and continuing or expanding the delivery of core services (e.g., video production, animation, and increasingly VR/AR experiences in 3D immersive visual spaces).

Examples include:


But the general result has been the same in both K-12 classes and in higher ed: not much has changed. Instead of being a tool for learning, the technology has had to make the case that it eases the instructor’s job; otherwise adoption is thwarted.[1]

There are tensions on a number of fronts. Efforts at the University of Michigan, led by James DeVaney, Assoc. Vice Provost for Digital Education, intend to gracefully ‘go out of business’. They want the integration of digital tools to be so pervasive that it no longer needs to be called out – and the place where that integration lives is the academic departments.

The vision put forward at the Leadership Roundtable on Academic Transformation, Digital Learning, and Design attempts to address academic department ‘ownership’ of ed tech research and applied innovation. In this case it involves establishing a new academic discipline altogether, rather than embedding it as a program in an existing discipline like Ed Psych or Instructional Technology – disciplines that tend to find homes in Schools or Colleges of Education, something that GU doesn’t have. That may well be a unique opportunity.

But there are cautions nonetheless. The proposal on the table, written by Prof. Eddie Maloney, Executive Director of the wonderful Center for New Designs in Learning and Scholarship (CNDLS, founded by Prof. Randy Bass many years ago), emphasized correctly that real innovation happens at the boundaries – of disciplines, research methods, or theoretical models. Yet instantiating a new graduate program in a department of Learning Design adopts the model of the academy that has served for hundreds of years.

Years ago Seymour Papert wrote Why School Reform Is Impossible[2], in which he describes a realization he had come to: that “reform” and “change” are not synonymous. Granted, Papert’s focus is again on K-12, but I don’t think it wise to dismiss this too hastily. He writes about “assimilation blindness” – “insofar as it refers to a mechanism of mental closure to foreign ideas” – and refutes Roy Pea’s conclusion that LOGO failed to live up to Papert’s predictions. Papert notes that the ‘grammar of school’ is a deep belief structure that is exceptionally difficult to dislodge and disrupt. It’s rather like the underlying philosophy of teaching and learning that all faculty have, whether they are aware of it or not.

Papert wrote,

“Complex systems are not made. They evolve…. education activists can be effective in fostering radical change by rejecting the concept of a planned reform and concentrating on creating the obvious conditions for Darwinian evolution: Allow rich diversity to play itself out.”

What has me thinking is how we enable the continuation of the creative, messy, but productive interplay at the edges of different systems. Are we sacrificing what makes the potential here so large by becoming another department in the contemporary academic higher ed institution? Does playing inside the square (a play on the Aussie phrasing) diminish our potential to change the organization, especially when we really don’t know the details of the outcomes we seek? What purpose is this proposal serving? Is it a search for internal legitimacy? What agenda(s) will it enable? What risks accompany the approach, and what opportunities exist to mitigate those risks?

As a post-script, I’m pleased to say that the presentation Eddie made to the GU curriculum committee for a new Masters in Learning Design was approved. We will see in the coming months and years how this grafting of service and applied research onto the new form of a hybrid academic department matures and impacts its surroundings.

One thing is sure: the community that has begun to form around it is rich, rewarding, and intellectually stimulating. It’s a plus when that’s complemented by deeply generous and open people.

[1] Why Ed Tech is Not Transforming How Teachers Teach – Education Week, June 11, 2015,

[2] Papert, Seymour (1999), “Why School Reform is Impossible”, The Journal of the Learning Sciences, 6(4), pp. 417-427, last accessed 5-5-2016,


Posted in academic_transformation, higher_education, innovation | Tagged , , | 1 Comment