Instigating Innovation: Accelerating Experimentation in Industry

When innovation centers, technology transfer centers, applied research platforms and other similar organisations want to help industry with innovation, one way is to assist companies to experiment with new ideas. From here onward I will simply refer to these centers as innovation and technology support centers. In most of the places where I work, these centers are hosted by or associated with universities, applied research organisations or technology transfer organisations.

One way to support industry to experiment is through various technology demonstration activities that give enterprises access to scarce and sophisticated equipment on which they can try new ideas. In its simplest form, a facility allows companies to order samples made to a certain specification, so a company can see whether a particular process can meet a specification or performance criterion. A slightly more intensive form of technology demonstration lets visitors in while a technology and its application are demonstrated (eyes only, no touching!). Very often equipment suppliers play this role, but in many developing countries equipment suppliers behave more like agents and cannot really demonstrate equipment.

In Germany I saw demonstration facilities where the pros showed the enterprises how things work, and then stood back to let teams from a company try things themselves.

A critical role of innovation support centers is to provide industry with comparative studies of different process equipment. For instance, an innovation center supporting metal-based manufacturers could provide a comparison of the costs and uses of different kinds of CAD systems, which would be extremely valuable to industry.

Maker labs, Fablabs and similar centers all make it easier for teams that want to create or tinker with an idea to gain access to diverse technologies, reducing the costs of experimenting. The range of equipment in these labs is often not very advanced, but it can be very diversified. In my experience these centers are very helpful for refining early idea formation and prototyping. However, helping manufacturers experiment with different process technologies, different kinds of materials, substitute technologies and so on is the binding constraint in many developing countries. The cost of gaining new knowledge is high, and because the cost of failure is also high, companies do not experiment.

Innovation support centers must be very intentional about reducing the costs of various kinds of experiments if they want manufacturers, emergent enterprises and inventors to try new ideas. These innovation centers can play a role by:

a) assisting companies to organize themselves internally for experimentation

b) assisting groups of companies to organize themselves for collaborative experimentation

c) conducting transparent experiments on behalf of industry collectives

In my experience, graduates from science disciplines often understand how to conduct experiments because their coursework involves time in a lab. They know basics like isolating variables, managing samples and measuring results. However, engineering graduates often do not have this experience (at least in the countries where I work most). For many engineering graduates, the closest they will ever get to an experiment is a CAD design, or perhaps a 3D-printed prototype.
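To make those basics concrete, here is a minimal sketch of what "isolating variables" can look like as a run plan: start from a baseline recipe and change exactly one factor per run, so any shift in the measurement can be attributed to that factor. The factor names and values below are purely hypothetical.

```python
# A minimal sketch of "isolating variables": start from a baseline recipe and
# change exactly one factor per run. All factor names and values are hypothetical.
baseline = {"cure_temp_C": 170, "line_speed_m_min": 8, "coat_thickness_um": 30}

variations = {
    "cure_temp_C": [160, 180],
    "line_speed_m_min": [5, 10],
    "coat_thickness_um": [20, 40],
}

run_plan = [("baseline", dict(baseline))]
for factor, levels in variations.items():
    for level in levels:
        run = dict(baseline)
        run[factor] = level  # only one variable differs from the baseline
        run_plan.append((f"{factor}={level}", run))

for label, settings in run_plan:
    print(label, settings)  # record sample IDs and the measured result against each run
```

Even a plan this simple forces the discipline of deciding up front what will be varied, what stays fixed and what will be measured.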

Therefore, it is necessary for a range of these innovation and technology support centres to assist companies at various hierarchical levels to experiment.

At the functional or operational level, organising for experimentation involves:

  • creating teams from different operational backgrounds
  • creating multiple teams working on the same problem
  • getting different teams to pursue different approaches
  • failing in parallel and then comparing results regularly
  • failing faster by using iterations, physical prototypes and mock-ups
  • anticipating and exploiting results, as Thomke suggests, even before they are confirmed

At a higher management level, organising for experimentation involves:

  • changing measurement systems to not only reward success, but also to encourage trying new things (thus encouraging learning and not discouraging failure)
  • moving beyond expert opinion to allow room for naivety and creativity
  • preparing for ideas and results that may point to management failures or inefficiencies elsewhere in the firm (e.g. improving a process may be hampered by a company policy from the finance department)

Getting multiple companies and supporting organisations to experiment together is of course a little harder. The managers of different organisations have many reasons to hide failures, thus undermining collective learning. One way around this could be to use a panel or collective of companies to identify a range of experiments, which are then conducted at the supporting institution in a transparent way. All the results (successes, failures and mixed results) are carefully documented and shared with the companies. However, getting the manufacturers to use these new ideas may require some incentives. In my experience, this works much better in a competitive environment, where companies are under pressure to use new ideas to gain an advantage. In industries with poor dynamism and low competition, new ideas are often not leveraged because it simply takes too much effort to be different.

Promising ideas from experiments can be combined and integrated after several iterations to create working prototypes. Here the challenge is to help industries to think small: first get the prototype process to work at a small scale and at low cost before scaling up or testing several variables simultaneously. An important heuristic is to prototype at the smallest possible scale while keeping the key mechanical or scientific properties consistent. More about this in a later post. (Or perhaps some of the people I have helped recently would not mind sharing their experience in the comments?)
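As a rough illustration of that heuristic (not a recipe from any specific project), the trick is to pick the one or two properties that govern the behaviour you are testing and keep those consistent while everything else shrinks. The sketch below uses the Reynolds number of a flow-driven process as the property to preserve; the fluid and the numbers are assumptions for illustration only.

```python
def reynolds_number(velocity_m_s: float, length_m: float, kinematic_viscosity_m2_s: float) -> float:
    """Re = v * L / nu, one example of a property to hold constant across scales."""
    return velocity_m_s * length_m / kinematic_viscosity_m2_s

# Illustrative full-scale process (water-like fluid; all values are assumptions)
full_scale = reynolds_number(velocity_m_s=2.0, length_m=0.5, kinematic_viscosity_m2_s=1e-6)

# A 1:5 prototype with the same fluid: to keep Re constant, velocity must go up by a factor of 5
prototype = reynolds_number(velocity_m_s=10.0, length_m=0.1, kinematic_viscosity_m2_s=1e-6)

print(full_scale, prototype)  # both 1,000,000: the small rig stays in the same flow regime
```

The same logic applies to thermal, chemical or structural properties: the prototype is only representative if the governing property stays in the same regime as the full-scale process.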

I know this is already a long post, but I will add that Dave Snowden promotes safe-to-fail probes, where teams are forced to design a range of experiments going in a range of directions, even if failure is certain in some instances. In my experience this works really well. It breaks the linear thinking that often dominates the technical and manufacturing industries by acknowledging that while there may be preferred solutions, alternatives and especially naive experiments should be included in the overall portfolio. To make this work it is really important that the teams report back regularly on their learning and results, and that all the teams together decide which solutions worked best within the context.
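The reporting-back step does not need sophisticated tooling. As a hedged sketch (my own illustration, not Snowden's method), a probe log can be as simple as recording each probe's direction, what was learned, and the joint decision to amplify or dampen it:

```python
from dataclasses import dataclass

@dataclass
class Probe:
    name: str      # the direction this small experiment explores
    learning: str  # what the team observed, failures included
    amplify: bool  # joint decision after the review: pursue further or stop

# Hypothetical portfolio of safe-to-fail probes, reviewed together at a fixed cadence
portfolio = [
    Probe("substitute recycled polymer", "warps above 60 C, holds shape below", True),
    Probe("skip the primer coat entirely", "adhesion failed on every sample", False),
    Probe("naive idea: water-based cleaner", "slower, but removes solvent handling", True),
]

for probe in portfolio:
    decision = "amplify" if probe.amplify else "dampen"
    print(f"{probe.name}: {probe.learning} -> {decision}")
```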

THOMKE, S.H. 2003.  Experimentation Matters: Unlocking the Potential of New Technologies for Innovation. Harvard Business Press.

 

Instigating innovation by enhancing experimentation

“We don’t experiment!” the operations manager sneered at me. “We know what we are doing. We are experts.” From the shaking of his head I could draw my own conclusions. It meant that this business had a very short-term focus in terms of innovation, mainly using a consensus-based approach to drive incremental improvement. The irony is that the word “expert” implies learning by doing, often over an extended period. The very people who become “experts” through experimentation and trying things become the gatekeepers who promote very narrow paths into the future, thus inhibiting learning in organizations.

The aversion to experimentation, despite its importance in innovation, is institutionalized in management. Many of the textbooks on innovation and technology management do not even have a chapter on experimentation (see below for some exceptions). Many industrial engineers and designers narrow the options down so early in a process or product design that what comes out can hardly be described as an experiment. Approaches such as lean make it very hard to experiment, as any variation is seen as a risk. In more science-based industries, such as pharmaceuticals, medicine and health, experimentation is the main approach to innovation.

Most manufacturers do not like the idea of experimentation, despite it being widespread in most companies. Just because management does not see it (or hear about it) does not mean it is not happening. This is the main problem: lots of companies (or rather employees) experiment, but the feedback systems into the various levels of management and cross-functional coordination are not working. Learning by doing is hard to do in these workplaces. Furthermore, management systems that reward success or compliance make learning by doing almost impossible.

Let me first unpack what I mean by experimentation.

Experimentation is a kind of investigation, an attempt to better understand how something works. It is often associated with trial and error. Sometimes experiments are carefully planned; other times they can be impulsive (like when people press the elevator button repeatedly to see if the machine responds faster). Experiments are sometimes based on deep insight or research, in which case they are almost like an authentication or proofing exercise. Other times they are done as a last resort (two failed attempts to get the machine going are followed by hitting it with a spanner). This could be naive, even a little desperate. (Suddenly the machine works and nobody knows what exactly solved the problem.) While experiments can be used to prove something, I believe that not enough managers realise that experiments are a powerful way to keep their technical people happy (geeks love tinkering) and a strong way to improve the innovative and knowledge capability of an organisation. What does it matter that this experiment was already done successfully in 1949? Why don’t we try it and see if we can figure it out? Remember, innovation is a process of combination and recombination of old, new and often distant capabilities and elements.

Experimentation in manufacturing companies happens at different levels.

  • It happens spontaneously on the work floor, where somebody needs to keep a process going. Ironically, experiments in the workplace are often the result of resource constraints (like trying to substitute one component/artifact/material/tool for another). A lot of potential innovations are missed by management because feedback doesn’t work, or because experimentation is not encouraged or allowed.
  • Experiments can also be directed and a little more formalized. Typically these experiments originate from a functional specialization in the business, like the design office or another function. In these experiments the objective, measurement and evaluation are valuable for management, as they could make alternative materials, processes, tools and approaches viable.
  • At a more strategic level, experimentation often happens when evaluating investments, for instance making small investments in a particular process or market opportunity. It could also be about experimenting with management structures, technology choices or strategies. Sometimes the workers on the factory floor bear the brunt of these “experiments”, which are not explained as experiments but rather as wise and thoroughly thought-through decisions. In companies that manage innovation strategically, decisions at a strategic level could involve deciding how much funding to set aside or invest in tools that enable experimentation, for instance 3D printing, rapid prototyping, computer-aided design and simulation.
  • Accidental experimentation occurs when somebody, by accident, negligence or ignorance, does something in a different way. This occasionally results in breakthroughs (think 3M), and more often in breakdowns or injuries. Accidental experimentation works in environments where creating options and variety is valued, and where co-workers or good management can notice “experimental results” worth keeping. However, in most of industry accidental experimentation is avoided as far as possible.

The above four kinds of experiments could all occur in a single organization. At a higher level, experimentation can also happen through collaboration beyond the organization, meaning that people from different companies and institutions work together in a structured experiment.

When you want to upgrade industries that have a tendency to under-invest in innovation, you can almost be certain that there is very little formal experimentation going on. By formal I mean thought through, managed and measured, proving one aspect at a time. It is often necessary to help businesses get this right.

Since this series is about instigating innovation in both firms and their supporting institutions, it is important to consider the role of supporting institutions. One important role for supporting institutions is to lower the costs and risks of experimentation for companies. This could be through the establishment of publicly funded prototyping or demonstration facilities. Another approach is for supporting organizations to support collaborative experiments. However, I sometimes find that supporting institutions themselves do not manage their own innovation in a very strategic or creative way.

Helping industries to improve how they conduct experiments need not be expensive and does not necessarily involve consulting services (many institutions are not organized for this). For universities there are some interventions that align with their mandates. For instance, exposing engineering students to formal experiments with strong evaluation elements (such as chemistry students have to go through) makes it more likely that the next generation of engineering graduates will be able to plan and execute more formal experiments. Or creating a service where industry can experiment with technology within a public institution. Or arranging tours or factory visits to places where certain kinds of experiments are done, or can be done.

Lastly, not all experimentation needs a physical embodiment. Design software, prototyping technology, simulation software and 3D printing are all tools that enable experimentation and reduce the costs and risks of experiments. Furthermore, experiments need not be expensive, but they should be thought through. I often find that companies want to create large experiments when a much smaller experiment focused on one or a few elements of the whole system would suffice. Here it is important to consider the science behind the experiment (below a certain scale, some material and process characteristics are no longer reliable or representative). The experiment must be just big enough to prove the point, or to offer measurement, comparison or functionality, and nothing more.
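One way to put a number on “just big enough” is a quick sample-size estimate before the trial. The sketch below uses the standard two-sample approximation for comparing two process variants; the effect sizes are illustrative assumptions, not values from any real experiment.

```python
from math import ceil
from statistics import NormalDist

def samples_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate n per group to detect a standardized difference (Cohen's d)
    between two process variants, using the usual two-sample normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_power = NormalDist().inv_cdf(power)          # desired statistical power
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# Example: the new process is expected to shift the quality metric by one standard deviation
print(samples_per_group(effect_size=1.0))   # roughly 16 parts per variant
# A smaller expected difference needs far more parts, often a signal to rethink the experiment
print(samples_per_group(effect_size=0.25))  # roughly 252 parts per variant
```

If the estimate comes out impractically large, that is usually a signal to aim for a bigger effect, reduce variation or rethink the experiment rather than to run it anyway.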

I will close with a little story. I once visited a stainless steel foundry. These businesses are not known for being innovative, but I was in for a surprise. The CEO of the foundry kept a list of official experiments that were under way. Each experiment typically had a small cross-functional team involved, supported by a senior management champion. The aim was not to succeed at all costs, but to figure things out, develop alternatives AND increase the company’s knowledge of what is possible. Everybody in the different sections of the business knew when experiments were taking place, and everybody was briefed on the results. Even though this is a very traditional industry, this company had managed to get its whole workforce excited about finding things out.

I promise I will get to the how in a future post in this series.

 

My favorite text books on experimentation in innovation are:

DODGSON, M., GANN, D. & SALTER, A.J. 2005.  Think, Play, Do: Technology, Innovation, and Organization. Oxford: Oxford University Press. (I think this one is now out of print)

VON HIPPEL, E. 1988.  The sources of innovation. New York, NY: Oxford University Press. (Despite being an old book this is really inspiring)

THOMKE, S.H. 2003.  Experimentation Matters: Unlocking the Potential of New Technologies for Innovation. Harvard Business Press.

HARVARD. 2001.  Harvard Business Review on Innovation. Harvard Business School Press.

If you know of a more recent book, please let me know.

 


Recognizing competing hypotheses as a sign of complexity

In order to improve the economic performance of an industry or a territory, it is important to recognize the current status quo of the economy. This is basically to understand “what is?”, but also to understand “what is possible next?”. You may think that local stakeholders, firms and public officials will know the answer to “what is going on now?”, but every time I have done such an assessment I have discovered new suppliers, new innovations, new demands and many new connections between different actors.

The benefit of being a facilitator, process consultant or development expert is that we can move between different actors, observe certain trends, recognize gaps and form an overall picture of what we think is going on. It is very difficult for enterprises to form such a picture, as they can only observe other firms from a distance.

The main challenge is figuring out what can be done to close certain gaps or to change the patterns that we observe. These are answers to “What is possible next?” questions. As Mesopartner, we always insist that any process to diagnose an industry or a region starts with the formulation of various hypotheses. This hypothesis formulation before we commence is not only about revealing our biases, nor only about figuring out what exactly we want to find out. It also helps us to figure out what kind of process is needed, the scope of the analysis and what different actors expect from the process.

Unlike in academic or scientific research, hypothesis formulation does not only happen in the early stages of a diagnostic or improvement process; it should be constantly reflected upon and expanded as we meet stakeholders and analyze data. This is where recognizing competing hypotheses within our team and between different stakeholders becomes important. This process is not about convergence, but about revealing what different actors and the investigator believe is going on.

Economic development practice is full of competing hypotheses that all seem to be very plausible. At a recent training event with Dave Snowden, the consequences of not recognizing or revealing these competing hypotheses struck me. According to Dave, competing hypotheses that plausibly explain the same phenomenon indicate that we are most likely dealing with a complex issue. For instance, in South Africa we have competing hypotheses about the role of small firms in the economy. One hypothesis is that small firms are engines of growth and innovation, and therefore deserve support. A competing hypothesis is that large firms invest more in innovation and growth, and that they are better drivers of economic growth. Both hypotheses are plausible – the issue is complex. Recognizing this complexity is very important, as the cause-and-effect relations are not easy to identify and they might even be changing – the situation is non-linear. (Marcus Jenal and I wrote a working paper on complexity in development.) This simply means that to get a specific outcome, the path will most likely be indirect or oblique – cause and effect is not linear.

Why is it important to recognize competing hypotheses, or to know when some patterns in the economy are complex? The answer is that it is almost impossible to analyze a complex issue with normal diagnostic instruments. Complex patterns can only be understood through engagement, that is, through experimentation. Again, according to Dave Snowden, you have to probe a complex issue by trying several different possible fixes simultaneously, then observe (sense) what seems to work best under the current circumstances. The bottom line is that you analyze a complex issue by experimenting with it, not by observing or analyzing it.

The implication of this insight for my own work has been huge. Recognizing that many of the issues I deal with are complex (because very plausible competing hypotheses exist) and can only be addressed through direct engagement has saved me and my customers a lot of resources that were previously spent on seemingly circular analysis. I now use hypothesis formation with my clients to see whether we have competing hypotheses about “what is” and “what must be done”. Where the hypotheses seem to be straightforward, we can define a research process to reveal what is going on and what can be done to improve the situation. But when we have different competing hypotheses about what is going on, we immediately devise several simultaneous experiments to try to find an upgrading path. I thought my customers would not like the idea of experiments, but I was wrong.

The conditions are that there must be many different experiments, that they are all very small, and that by design they take different approaches to solving the same problem. This takes learning by doing to a new level, because now failure is as important as success: it helps us to find the paths to better performance by reducing alternatives and finding the factors in the context that make progress possible. The biggest surprise for me is that this process of purposeful small experiments, to see what is possible under current conditions (context), has unlocked my own and my customers’ creativity.

Perhaps a topic for a separate blog post: to really uncover these competing hypotheses we have to make sure that we do not converge too soon on what we think is going on. Maintaining divergence and variety is key – this is another challenge for me as a facilitator who is used to helping minds meet!
