Instigating Innovation: Accelerating Experimentation in Industry

When innovation centers, technology transfer centers, applied research platforms and similar organisations want to help industry innovate, one way is to assist companies to experiment with new ideas. From here onward I will simply refer to these centers as innovation and technology support centers. In most of the places where I work, these centers are hosted by or associated with universities, applied research organisations or technology transfer organisations.

One way to support industry to experiment is through various technology demonstration-like activities that give enterprises access to scarce and sophisticated equipment where they can try new ideas. In its simplest form, a facility allows companies to order samples made to a certain specification, so that a company can see whether a particular process can meet a specification or performance criterion. A slightly more intensive form of technology demonstration allows visitors in, and a technology and its application are demonstrated (eyes only, no touching!). Very often equipment suppliers play this role, but in many developing countries equipment suppliers behave more like agents and cannot really demonstrate equipment.

In Germany I saw demonstration facilities where the pros showed the enterprises how things work, and then stood back to allow teams from a company to try things themselves.

A critical role of innovation support centers is to provide industry with comparative studies of different process equipment. For instance, in an innovation center supporting metal-based manufacturers, a comparison of the costs and uses of different kinds of CAD systems could be extremely valuable to industry.

Maker labs, Fablabs and similar centers all make it easier for teams that want to create or tinker with an idea to gain access to diverse technologies, reducing the costs of experimenting. The range of equipment in these labs is often not very advanced, but it is often very diversified. In my experience these centers are very helpful for refining early idea formation and prototyping. However, helping manufacturers experiment with different process technologies, different kinds of materials, substitute technologies and so on addresses a binding constraint in many developing countries. The costs of gaining new knowledge are high, and because the costs of failure are also high, companies do not experiment.

Innovation support centers must be very intentional about reducing the costs of various kinds of experiments if they want manufacturers, emergent enterprises and inventors to try new ideas. These innovation centers can play a role by:

a) assisting companies to organize themselves internally for experimentation

b) assisting groups of companies to organize themselves for collaborative experimentation

c) conducting transparent experiments on behalf of industry collectives

In my experience, graduates from science disciplines often understand how to conduct experiments because their coursework involves time in a lab. They know the basics: isolating variables, managing samples, measuring results, and so on. However, engineering graduates often do not have this experience (at least in the countries where I work most). For many engineering graduates, the closest they will ever get to an experiment is a CAD design, or perhaps a 3D printed prototype.

Therefore, it is necessary for a range of these innovation and technology support centres to assist companies at various hierarchical levels to experiment.

At the functional or operational level, organising for experimentation involves:

  • creating teams from different operational backgrounds,
  • creating multiple teams working on the same problem,
  • getting different teams to pursue different approaches,
  • failing in parallel and then comparing results regularly,
  • failing faster by using iterations, physical prototypes and mock-ups,
  • anticipating and exploiting results, even before they are confirmed (as Thomke suggests).
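The discipline of failing in parallel and comparing results regularly can be made concrete with a small tracking sketch. Everything below is hypothetical: the team names, the approaches and the single "result" metric are invented for illustration. The point is only that failed attempts stay in the record alongside successes, so teams can be compared.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    team: str        # hypothetical team name
    approach: str    # the direction this team is pursuing
    succeeded: bool
    result: float    # an invented metric, e.g. defect rate (lower is better)
    notes: str = ""

# Two teams attack the same problem from different directions; every
# attempt, including the failures, is logged for regular comparison.
trials = [
    Trial("Team A", "substitute material", False, 0.42, "warped at high temp"),
    Trial("Team A", "substitute material", True, 0.12, "new supplier batch"),
    Trial("Team B", "reorder process steps", False, 0.55, "tooling jammed"),
    Trial("Team B", "reorder process steps", False, 0.31, "better, still high"),
]

def best_so_far(log):
    """Compare across all teams: lowest defect rate wins so far."""
    return min(log, key=lambda t: t.result)

best = best_so_far(trials)
print(f"Best so far: {best.team}, {best.approach}, {best.result}")
```

The failed trials are deliberately kept in the log; comparing them across teams is where the collective learning happens.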

At a higher management level, organising for experimentation involves:

  • changing measurement systems to not only reward success, but to encourage trying new things (thus encouraging learning and not discouraging failure),
  • moving from expert opinion to allowing naivety and creativity,
  • preparing for ideas and results that may point to management failures or inefficiencies elsewhere in the firm (e.g. improving a process may be hampered by a company policy from the finance department).

Getting multiple companies and supporting organisations to experiment together is of course a little harder. Managers of different organisations have many reasons to hide failures, thus undermining collective learning. One way around this could be to use a panel or collective of companies to identify a range of experiments, which are then conducted at the supporting institution in a transparent way. All the results (successes, failures and variable results) are carefully documented and shared with the companies. However, getting the manufacturers to use these new ideas may require some incentives. In my experience, this works much better in a competitive environment, where companies are under pressure to use new ideas to gain an advantage. In industries with poor dynamism and low competition, new ideas are often not leveraged because it simply takes too much effort to be different.

Promising ideas from experiments can be combined and integrated over several iterations to create working prototypes. Here the challenge is to help industries to think small: first get the prototype process to work at a small scale and at lower cost before going to large scale or testing several variables simultaneously. An important heuristic is to prototype at the smallest possible scale while keeping the key mechanical or scientific properties consistent. More about this in a later post. (Or perhaps some of the people I have helped recently would not mind sharing their experience in the comments?)
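One well-known way to keep the key mechanical or scientific properties consistent while shrinking a prototype is dimensional similarity. The sketch below uses the Reynolds number from fluid mechanics as an example (my choice of illustration, not something the heuristic prescribes): a 1:5 scale flow rig stays representative of full-scale flow behaviour only if the velocity is adjusted so the Reynolds number matches.

```python
def reynolds(density, velocity, length, viscosity):
    """Reynolds number: rho * v * L / mu (dimensionless)."""
    return density * velocity * length / viscosity

# Full-scale process: a water-like fluid in a 0.5 m channel at 2 m/s.
full_scale = reynolds(density=1000.0, velocity=2.0, length=0.5, viscosity=1.0e-3)

# 1:5 prototype (0.1 m channel): with the same fluid, the velocity must be
# five times higher to keep the Reynolds number, and thus the flow regime,
# the same as at full scale.
prototype = reynolds(density=1000.0, velocity=10.0, length=0.1, viscosity=1.0e-3)

print(full_scale, prototype)  # both around 1e6, so the small rig is representative
```

If the matched velocity is impractical, that is a signal the experiment cannot simply be shrunk, which is exactly the "science behind the experiment" a small prototype has to respect.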

I know this is already a long post, but I will add that Dave Snowden promotes safe-to-fail probes, where teams are forced to design a range of experiments going in a range of directions, even if failure is certain in some instances. In my experience this works really well. It breaks the linear thinking that often dominates the technical and manufacturing industries by acknowledging that while there may be preferred solutions, alternatives and especially naive experiments should be included in the overall portfolio. To make this work it is really important that the teams report back regularly on their learning and results, and that all the teams together decide which solutions worked best within the context.

THOMKE, S.H. 2003.  Experimentation Matters: Unlocking the Potential of New Technologies for Innovation. Harvard Business Press.

 

Blunders, boo-boos and silly mistakes made on the fly

I am acutely aware that I often make grammar and spelling mistakes in my blogs. I do apologize for these. I feel silly when I realize I made a mistake. I have no excuses. As my favorite cartoonist Hugh Macleod @gapingvoid put it, excuses are a disease.


The intention of my blog is not to write perfectly composed articles, but to share my thinking with a broader audience than just the small group of clients, collaborators and friends in my network that I get to work with on a frequent basis. Ask me about those perfect articles and books and I can tell you which ones to buy. I collect them.

Just as the practitioners and decision makers I support have to confront clients, decisions and complexities without always having time to prepare perfectly, so I capture conversations, arguments or ideas developed with my clients – on the fly. The point is that in the field, knowledge and ideas are not always perfectly described, neatly organised, thoroughly prepared. Sometimes the best explanations happen on napkins, flipcharts or a piece of paper.


The purpose of my blog site is to help the people I work with to explain some of these concepts on the fly. Hopefully they can do it more briefly than it sometimes takes me, or maybe even more eloquently. Every time these concepts or thoughts are explained, it becomes easier and easier.

I found it works best to write at my client sites, on the way home (on the plane, not while driving – yet), or between meetings – and then to post these articles before I start doubting the relevance of my ideas or the insight gained by explaining something to somebody (yes, most of my posts are based on real conversations with clients out there facing complex situations). I have a huge collection of articles written in the safety of my office, far from the coalface, that I have never published because they suddenly seem less than perfect or even insignificant. It is easy to feel challenged when I sit in my office surrounded by books written by articulate scholars. I wish I could say these scholars always inspire me to write, but that would not be honest. Sometimes they do. Especially when I can connect the different kinds of literature that I have collected over time. However, often this collection makes me feel discouraged. I just have to look at the amazing content my late friend Jorg Meyer-Stamer wrote on a wide range of topics to feel like I should rather not commit anything to the official record.

I assure my readers that when the text on these blogs makes it into other publication forms, I usually first get an editor to fix all those pesky grammar mistakes.

I thank those of you that read regularly, those that share your ideas with me – even if you don’t agree with everything I post. Thank you for pointing out the mistakes, the inconsistencies or your disagreements with what I post. I especially want to thank those that also take the risk of sharing their comments on Linkedin or directly as a comment to this blog, because you also take the risk of making mistakes or feeling exposed. Please don’t stop. I won’t. 


I have often considered stopping blogging, just as I have often wanted to quit co-producing the LEDCast (more than 1,000,000 downloads now!!), partly due to my difficulty saying ‘s’ or ‘r’ when my tongue gets tied. Somehow the workarounds I use when I speak have made their way into my writing.

So as long as I receive your ideas, comments, notes, emails, tweets and calls I will keep on blogging.

 

 

Teaching on innovation systems – afterthought

The post about how I teach on the topic of innovation systems two weeks ago elicited a much bigger response than I expected. The tips, ideas, confirmations and questions I received inspired me to think about how I can share more practical training advice. I have a lot to share, simply because I love teaching on a wide range of topics. True to my mental construct of an innovator, I constantly develop small modules that can be combined, re-arranged, shortened or expanded to meet the requirements of the teams I support and coach.

For instance, the innovation systems outline that I explained in that previous post consists of two parts. Part 1 is made up of modules on innovation and technology:

  • Innovation, invention and different kinds of innovation,
  • Knowledge generation in enterprises,
  • What is technology? Definitions, applications and implications of various definitions,
  • Different kinds of competition and their effect on the innovative behavior of enterprises,
  • Knowledge generation in enterprises and organisations

Part 2 then builds on this foundation with topics central to the promotion of innovation systems, with modules on:

  • Knowledge generation, co-generation and assimilation in societies,
  • Defining innovation systems,
  • Role of different kinds of economic and social institutions in innovation systems,
  • The importance and dynamics of building technological capability,
  • Systemic competitiveness as a way of focusing meso-level institutions on persistent market failure
If needed it is easy to bring in many other topics such as:

  • Technological change, social change, economic change (based on the excellent work by Eric Beinhocker),
  • Assisting stakeholders to embrace sophisticated demand as a stimulus,
  • Diagnosing value chains,
  • Technology transfer, demonstration and extension, and so on

Yesterday I was reflecting with Frank Waeltring about the order of these sessions: why, in my experience, Part 1 goes before Part 2, and how difficult it is to present Part 2 without the basics of Part 1 in place. We reflected on why it is easier to start with foundation topics on innovation and technology management, and thereafter move to the more abstract content of innovation systems.

In my experience, development practitioners and policy makers often believe the link between the subjects of innovation/technology management and innovation systems promotion is the concept of “innovation”, almost as if innovation happens in enterprises, and innovation systems are then the public sector’s way to make innovation happen in enterprises. This logic is an important stumbling block that many people I have supported struggle with. In my book on the promotion of innovation systems I created the following table to explain the difference.

Difference between innovation/technology management and innovation systems promotion

The connector between these two domains is not innovation (despite it being common to the names of the two domains). It is knowledge. Not necessarily formal knowledge (the over-simplistic logic of more engineers and PhDs = more innovation), but various forms of knowledge. Tacit knowledge. Knowing who to speak to. Being exposed to other people from different knowledge and social domains. The costs and ease of getting information from somebody you know or don’t know. Learning from your own mistakes and the attempts of others.

Some places, countries and industries get this right, others struggle. Trust is central. This dynamic takes time to develop. You can sense its presence way before you can figure out how to measure it. While many of these issues can be addressed at a strategic level in an organisation like a company (or a publicly funded institution), many of these kinds of knowledge flows are inter-dependent and can be accelerated by taking an innovation system(ic) perspective.

The conclusion is a real tongue twister: The connection between the body of knowledge of innovation/technology management and the body of knowledge about innovation systems development is the body of knowledge on knowledge and how it emerges, gets assimilated, absorbed and further developed.

That is why knowledge generation and learning by doing fit so well into Part 1, but also why the picture is incomplete if they are not addressed again in Part 2, especially the systemic elements of knowledge dissemination and absorption. It is the bridge.

 

Significance over scale when selecting sectors

When promoting territorial economic development from an innovation systems perspective, it is important to find ways of increasing the use of knowledge and innovation in the region. However, in mainstream economic development there is a tendency to target the private sector based on scale. This means that practitioners look at quantitative measures such as jobs, numbers of enterprises, numbers of beneficiaries, etc. when deciding where to do analysis and focus support. This is common practice in value chain promotion, sub-sector selection and the like. Many development programmes do this as well, prioritizing scale measures such as jobs, women, rural individuals, etc.

From my experience of assisting development organisations to strengthen the economic resilience of regional economies (which means more innovation, more experiments, more diversity, increased use of knowledge, more collaboration between different technological domains), I have found that the scale argument is distracting: too focused on the beneficiaries (whatever is counted) and not focused enough on those indirect public or private agents that are significant and that enable a whole variety of economic activities to take place. By significant I mean that there could even be only one stakeholder or entry point (so the direct scale measure is low), but addressing an issue there enables a whole variety of economic activities to take place.

Of course, scale is very important when local politicians need votes. It is also important when you have a limited budget and must try to achieve widespread benefit. For this reason scale is very important for social programmes.

However, when local institutions are trying to strengthen the local innovation system, in other words improve the diversity and technological capability of a region, then scale becomes a second priority. The first priority becomes identifying economic activities that enable diversity, or that reduce the costs for enterprises to innovate and to use knowledge more productively. The reason this does not happen naturally is that these activities are often much harder to detect. To make it worse, “significance” could also be a matter of opinion (which means you have to actually speak to enterprises and their supporting institutions), while crunching data and making graphs often feels safer and appears more rigorous.

My argument is that in regions, the long-term evolution and growth of the economy is based on supporting diversification and the creation of options. These options are combined and recombined by entrepreneurs to create new economic value in the region, and in so doing they create more options for others. By focusing exclusively on scale, economic actors and their networks increasingly behave in a homogeneous way. Innovation becomes harder, and economic diversity is not really increased. I would go as far as saying that success becomes a trap, because once a recipe is proven it is also harder to change. As the different actors become more interdependent and synchronized, the system becomes path dependent. Some systems thinkers refer to this phenomenon as tightly coupled, meaning a failure in one area quickly spills over into other areas. This explains why whole regions go into decline when key industries are in decline: the economic system in the region became too tightly coupled.

But I must contradict myself just briefly. When interventions are more generic in nature, meaning they address market failures that affect many different industries and economic activities, then scale is of course important.

Experienced development practitioners manage to develop portfolios in which some activities are about scale (for instance, targeting a large number of informal traders) and some are about significance (for instance, ensuring that local conformity testing labs are accessible to local manufacturers).

The real challenge is to figure out which emergent economic activities are significant for improving the technological capability in the region. New ideas are undermined by market failures and often struggle to gain traction. Many new activities require a certain minimum economic scale before they can be sustained, but this is a different kind of scale than practitioners mean when they use scale of impact as a selection criterion. Many small but significant economic activities cannot grow if they do not receive public support in the form of promotion, awareness raising or perhaps some carefully designed funding support.

There is a wide range of market failures, such as high coordination costs with other actors, high search costs, adverse selection, information asymmetry and public good failures, that undermine emergence in local economies. It is exactly for this reason that public sector support at a territorial level (meaning sub-national) must be sensitive to these market failures and to how they undermine the emergence of new ideas that could be significant to others. The challenge is that local stakeholders such as local governments often have limited influence over public institutions in the region that are funded from other spheres of public administration.

Let me wrap up. My argument is that scale is often the wrong place to start when trying to improve the innovation system in a region. Yes, there are instances where scale is important. But some things that could be significant, like the emergence of variety and new ideas, often get lost when interventions are selected based on outreach. Furthermore, the focus on large-scale impact draws attention to the symptoms of problems, and not to the institutions that are supposed to address market failures and support the emergence of novelty.

I will stop writing now, Marcus always complains that my posts are too long!

Let me know if I should expand on the kinds of market failures that prevent local economies from becoming technologically more capable.

 

 

Instigating innovation by enhancing experimentation

“We don’t experiment!” the operations manager sneered at me. “We know what we are doing. We are experts.” From the shaking of his head I could draw my own conclusions: this business has a very short-term focus in terms of innovation, mainly using a consensus-based approach to drive incremental improvement. The irony is that the word “expert” implies learning by doing, often over an extended period. The very people who become “experts” through experimentation and trying things become the gatekeepers who promote very narrow paths into the future, thus inhibiting learning in organizations.

The aversion to experimentation, despite its importance in innovation, is institutionalized in management. Many of the textbooks on innovation and technology management do not even have a chapter on experimentation (see below for some exceptions). Many industrial engineers and designers narrow the options down so early in a process or product design that what comes out can hardly be described as an experiment. Approaches such as lean make it very hard to experiment, as any variation is seen as a risk. In more science-based industries, such as pharmaceuticals, medicine and health, experimentation is the main approach to innovation.

Most manufacturers do not like the idea of experimentation, despite it being widespread in most companies. That management does not see it (or hear about it) does not mean it is not happening. This is the main problem: lots of companies (or rather their employees) experiment, but the feedback systems into the various levels of management and cross-functional coordination are not working. Learning by doing is hard in these workplaces. Furthermore, management systems that reward success or compliance make learning by doing almost impossible.

Let me first unpack what I mean with experimentation.

Experimentation is a kind of investigation, an attempt to better understand how something works. It is often associated with trial and error. Sometimes experiments are carefully planned; other times they are impulsive (like when people press the elevator button repeatedly to see if the machine responds faster). Sometimes experiments are based on deep insight or research, and then they are almost an authentication or proofing exercise. Other times experimenting is a last resort: two failed attempts to get the machine to work are followed by hitting it with a spanner. This could be naive, even a little desperate. (Suddenly the machine works and nobody knows what exactly solved the problem.) While experiments can be used to prove something, I believe that not enough managers realise that experiments are a powerful way to keep their technical people happy (geeks love tinkering) and a strong way to improve the innovative and knowledge capability of an organisation. What does it matter if this experiment was already done successfully in 1949? Why don’t we try it and see if we can figure it out? Remember, innovation is a process of combination and recombination of old, new and often distant capabilities and elements.

Experimentation in manufacturing happens at different levels.

  • It happens spontaneously on the work floor, where somebody needs to keep a process going. Ironically, experiments in the workplace are often the result of resource constraints (like trying to substitute one component/artifact/material/tool for another). A lot of potential innovations are missed by management because feedback doesn’t work, or because experimentation is not encouraged or allowed.
  • Experiments can also be directed and a little more formalized. Typically these experiments originate from a functional specialization in the business, like the design office. In these experiments the objective, measurement and evaluation of the experiment are valuable to management, as they could make alternative materials, processes, tools and approaches viable.
  • At a more strategic level, experimentation often happens when evaluating investments, for instance making small investments in a particular process or market opportunity. It could also be about experimenting with management structures, technology choices or strategies. Sometimes the workers on the factory floor bear the brunt of these “experiments”, which are not explained as experiments but rather as wise and thoroughly thought-through decisions. In companies that manage innovation strategically, decisions at this level could involve deciding how much to set aside or invest in tools that enable experimentation, for instance 3D printing, rapid prototyping, computer aided design and simulation, etc.
  • Accidental experimentation occurs when somebody, by accident, negligence or ignorance, does something in a different way. This occasionally results in breakthroughs (think 3M), and more often in breakdown or injury. Accidental experimentation works in environments where creating options and variety is valued, and where co-workers or good management can notice “experimental results” worth keeping. However, in most of industry accidental experimentation is avoided as far as possible.

The above four kinds of experiments could all occur in a single organization. At a higher level, experimentation can also happen through collaboration beyond the organization, meaning that people from different companies and institutions work together in a structured experiment.

When you want to upgrade industries that have a tendency to under-invest in innovation, you can almost be certain that there is very little formal experimentation going on. By formal I mean thought through, managed and measured, proving one aspect at a time. It is often necessary to help businesses get this right.
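Proving one aspect at a time also keeps formal experiments affordable, because the number of runs in a full factorial design grows multiplicatively with the factors you vary. A minimal sketch (the factor names and levels are invented for illustration, not taken from any real shop floor):

```python
from itertools import product

# Three two-level factors: varying all of them at once needs 2 * 2 * 2 = 8
# runs, and every extra factor doubles the bill. Proving one aspect at a
# time, holding the others fixed, needs far fewer runs per question.
factors = {
    "material": ["alloy A", "alloy B"],
    "temperature_C": [180, 220],
    "feed_rate": ["slow", "fast"],
}

# Enumerate every combination as one experimental run.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(runs))  # 8
```

This is also why the measurement plan matters: with eight runs and no records, nobody can say afterwards which factor made the difference.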

Since this series is about instigating innovation in both firms and their supporting institutions, it is important to consider the role of supporting institutions. One important role for them is to lower the costs and risks of experimentation for companies. This could be through the establishment of publicly funded prototyping or demonstration facilities. Another approach is for supporting organizations to support collaborative experiments. However, I sometimes find that supporting institutions themselves do not manage their own innovation in a very strategic or creative way.

Helping industries to improve how they conduct experiments need not be expensive and does not necessarily involve consulting services (many institutions are not organized for this). For universities there are some interventions that align with their mandates. For instance, exposing engineering students to formal experiments with strong evaluation elements (such as chemistry students have to go through) makes it more likely that the next generation of engineering graduates will be able to plan and execute more formal experiments. Or creating a service where industry can experiment with technology within a public institution. Or arranging tours or factory visits to places where certain kinds of experiments are done, or can be done.

Lastly, not all experimentation needs a physical embodiment. Design software, prototyping technology, simulation software and 3D printing are all tools that enable experimentation and reduce the costs and risks of experiments. Furthermore, experiments need not be expensive, but they should be thought through. I often find that companies want to create large experiments when a much smaller experiment, focused on perhaps one or a few elements of the whole system, would suffice. Here it is important to consider the science behind the experiment (below a certain scale, certain materials and process characteristics are no longer reliable or representative). The experiment must be just big enough to prove the point, or to offer measurement, comparison or functionality, and nothing more.

I will close with a little story. I once visited a stainless steel foundry. These businesses are not known to be innovative, but I was in for a surprise. The CEO of the foundry had a list of official experiments that were under way. Each experiment had a small cross-functional team involved, supported by a senior management champion. The aim was not to succeed at all costs, but to figure things out, develop alternatives AND increase the company’s knowledge of what is possible. Everybody in the different sections of the business knew when experiments were taking place, and everybody was briefed on the results. Even though this is a very traditional industry, this company had managed to get its whole workforce excited about finding things out.

I promise I will get to the how in a future post in this series.

 

My favorite text books on experimentation in innovation are:

DODGSON, M., GANN, D. & SALTER, A.J. 2005.  Think, Play, Do: Technology, Innovation, and Organization. Oxford: Oxford University Press. (I think this one is now out of print)

VON HIPPEL, E. 1988.  The sources of innovation. New York, NY: Oxford University Press. (Despite being an old book this is really inspiring)

THOMKE, S.H. 2003.  Experimentation Matters: Unlocking the Potential of New Technologies for Innovation. Harvard Business Press.

HARVARD. 2001.  Harvard Business Review on Innovation. Harvard Business School Press.

If you know of a more recent book, please let me know.

 
