By Paul Wagorn, President of IdeaConnection
Loosely defined, crowdsourcing describes using an open contest or a crowd of people, all contributing to solve a problem or get something done. The term was coined by Jeff Howe in Wired magazine in 2006, and was officially added to the Oxford English Dictionary just last year, in its June 2013 update.
The premise (and promise) of crowdsourcing is that it allows organizations to solicit the ‘wisdom of the crowd’ by connecting with a large number of people who share a common goal, all at once.
Although the term is new, the concept has its roots in history. In 1714, the British government needed a solution to what they called “the Longitude Problem”. This navigational challenge, if solved, could save the lives of the thousands of sailors who perished at sea each year. The British government offered £20,000 (about $5 million in today’s money) to anyone who could invent a solution to this seemingly unsolvable problem. John Harrison, the son of a carpenter, won the prize by inventing an accurate marine chronometer. This early crowdsourcing experiment is a prime example of how innovation and invention can come from anywhere, even from people outside the industry.
There is an endless list of modern examples of crowdsourcing, with the concept even forming the basis of entire business models, such as Wikipedia’s. Crowdsourcing has also invaded pop culture, with television shows such as American Idol and dozens of other crowd-powered events making use of the concept.
Many companies have been engaging in crowdsourcing as a way of finding solutions to difficult technical problems. Some of these companies engage directly with the crowd, while others have used intermediaries or ‘expert networks’ who run crowdsourcing platforms as a product.
At least on paper, crowdsourcing seems like a great solution – put your problem out there, and wait for a solution from the masses. What many companies have discovered, however, is that there are many unforeseen, even fatal difficulties they encounter while engaging in these crowdsourcing efforts. In some cases these challenges completely eclipse the benefits, resulting in an unfulfilled promise and a disillusioned corporate team who had both high hopes and high expectations.
I am going to discuss six of the main “gotchas”, and also offer some insight into the thinking behind the model our company uses – it was purposefully built to resolve these problems, and gets a lot closer to a platform that actually delivers on the promise.
Problem #1: Lack of confidentiality
Crowdsourcing usually involves a group of people who are unknown to you. Due to the risk of confidential information getting into your competitors’ hands, there are usually details about the problem, your business intentions, or the technology that you must keep hidden.
Even if the participants sign a non-disclosure agreement, you still can’t feel confident that you know who they are, where they are from, or for whom they work. There is only a certain amount of confidence that you can have in a large, anonymous group like this, and from a practical perspective this means that you need to hold back important information that could otherwise help them solve your problem – and this negatively affects the quality and scope of solutions that you receive.
If we could build something that still leverages the diversity of a crowd, but with better control over confidentiality, we could disclose more information and arm participants with the details they need – resulting in solutions that are much more on target, and therefore more actionable and valuable.
Problem #2: Lack of communication
Because we are reaching out to a group of people we don’t know very well – and in many cases a large group – it becomes very difficult to communicate with all of these participants in a meaningful way and help them achieve a deep understanding of the problem they are to solve. Unfortunately, if you don’t understand the problem, you can’t create a solution.
In a typical crowdsourcing scenario, no matter how well your problem statement is crafted, the result is still a one-sided conversation, where you are pushing information out to participants, with very little actual dialogue.
The result is that the participants working to solve your problem are left to make assumptions, and these assumptions can lead to off-target and out-of-scope solutions – and therefore less confidence that you will get the solution you need. All of this potentially adds up to a waste of time and money.
Problem #3: Risk of IP contamination
At least on paper, getting dozens or even hundreds of potential solutions to your problem sounds great, but when you start to think it through from an IP management standpoint, it can create a headache that many of us would like to avoid – massive potential for IP contamination.
This is the one that causes your IP/legal people to start sweating. More than likely, almost every solution that makes its way to you is a potential invention, and by definition contains confidential information about that invention. You will be reviewing (and exposing yourself to) dozens or even hundreds of confidential technologies. Of course, when you make an award for the solution that you want to use, you will receive the intellectual property rights to that solution – but what about all the others?
You will need to make sure that none of your future work infringes on any of the rejected solutions, or risk exposure to an infringement lawsuit. How difficult will it be, four years from now, to remember that solution #76 from your crowdsourcing campaign had something to do with what one of your researchers is now working on?
Unless your crowdsourcing campaign includes terms that give you the rights to every submitted idea (which will seem unfair to participants who are not paid for their idea), this quickly becomes unmanageable.
What companies need is a model that significantly reduces the exposure to IP contamination, while at the same time accessing the very best solutions from the participants.
Problem #4: IP that isn’t clean
You reach out to the crowd hoping to find brilliant, disruptive solutions that, in some cases, you didn’t even think to ask for. It also means that you can get responses from people you hadn’t anticipated hearing from.
Imagine this scenario: You launch a crowdsourcing event seeking a solution to a technical problem. You send out the challenge, and receive many responses. One of the solutions is seemingly perfect – a low-cost, unique and disruptive solution to your problem that hits all the buttons.
When it comes time to assign the rights to the solution to you, you discover that the submitter works for a university. They didn’t think much of it at the time, and didn’t bother to review their university’s lengthy IP policy document – presumably because they figured that the chance of actually being selected and paid was low. Upon inspection, it’s discovered that in the university IP agreement, the institution makes claim to all inventions by the submitter, making it impractical or impossible to transfer the IP to the company.
The result is a disaster. The submitter of the solution doesn’t get paid, and even worse, the company is now unable to use the solution to its problem. The perfect solution is contaminated and off the table.
Now, imagine what would happen if this university IP policy was never actually disclosed or discovered – the submitter would have signed the IP assignment, and the company would start using the solution in its multi-million dollar product launch. Think about the legal disaster that could be looming ahead for the company.
This can happen with universities, institutions and even people with employment contracts. The fact is, when you don’t know who you are dealing with, the risks are real.
Problem #5: Ideas are not the same as solutions
Studies of competitions show that as the number of competitors increases, the average effort put forth by each individual decreases. In other words, the risk/reward ratio appears worse, so participants are less likely to put in a substantial amount of work when there are many competitors.
The result is that when you crowdsource solutions from a large group rather than a smaller number of competitors, what you typically end up with are ideas rather than solutions – because there is a lot of work involved in transforming a mere idea into a fully developed, supported solution.
But all ideas are great, aren’t they? No. An idea needs context. What are the constraints? What is the reason to believe it will work, and where is the supporting information? How will the idea be implemented, and how exactly will it solve the problem?
An idea without context or purpose is impossible to evaluate.
Unlike an idea, a solution has a specific purpose – to solve the problem. Solutions can be evaluated on effectiveness, cost, risk and value. One of the most difficult parts of many managers’ jobs is tossing out good ideas that are not actual solutions.
Many people would argue that an idea that is not completely thought through simply creates more work for the company, which must do all the heavy lifting of validating, supporting and thinking it through.
If crowdsourcing is providing only ideas, then even in the best case, it’s only part of the way there. In the worst case, it simply causes more work for the company, resulting in disillusionment in the process.
There is a sweet spot between having access to enough solutions to ensure variety, but not so many that the quality of what the process produces suffers. Based on interviews with corporate executives, I do not believe that conventional crowdsourcing has this balance right.
Problem #6: Burden on your technical staff
In some instances, crowdsourcing can work when you need to reach out and find an idea, but to truly take advantage of external expertise and maximize its value, you need to be able to scale it beyond the pilot-program stage and turn it into a tool that can be used again and again. The best tools are the ones you can integrate into a standard process and use prolifically.
As we try to do this with traditional crowdsourcing, we encounter some difficult scaling and burden issues.
Consider that quite often, when you decide to go outside your company for a solution, it’s not necessarily because you are completely unable to solve the problem, but because you simply don’t have the resources to throw at it right now.
So, imagine that you turn to crowdsourcing to solve the problem. Let’s say you have a small audience of maybe 1,000 people, and about 100 of them decide that they want to work on your problem. That’s one hundred people asking questions. That’s one hundred submissions. Your technical people will need to answer the questions and review one hundred technical papers – this is not a trivial job.
This defeats part of the reason you decided to go outside your company in the first place – everyone was already too busy!
Now, scale that up to 20 projects.
These are the types of problems that companies have been running into as they dip their toes into the world of crowdsourcing, especially as they try to make it an integral element of their ongoing technology strategy.
Most importantly: how do we take the good parts of crowdsourcing and build something better, something that aligns with the real needs of business?
Many years ago, we thought long and hard about these problems, how to fix them, and how we could create a better tool that can tap into the experience and expertise of the crowd while avoiding the pitfalls that hold it back from prime-time technical problem solving.
It became obvious that instead of working with individuals, there were huge benefits to forming interdisciplinary teams. Teams can tackle problems that are beyond the reach and scope of a single individual. By building collaboration into a model that had been based solely on competition, we could increase the quality and breadth of the solutions, and solve many of the problems above at the same time.
Thomas Edison was once asked why he had so many assistants (he had 21 at the time!). He said:
“If I could solve all the problems myself, I would!”
The idea of a solitary genius coming up with disruptive inventions is more the exception than the rule. The popular image of Thomas Edison as a lone inventive genius was smoke and mirrors, created to help build a brand rather than to reflect reality.
The simple fact is that teams can solve problems better than individuals.
Let me tell you a quick story. Some time ago, we had a team working on a particularly tough problem for a client. One of the members was a professor in the very subject matter of the project. His teammates were chosen because they had adjacent capabilities – it was a true interdisciplinary team.
The professor called us one day and said, “I’m the only person on this team with considerable knowledge in this area, and the other members are essentially… Neanderthals.” He asked to be split off from his team so he could work alone, as he already had a great solution. We obliged, and the professor worked on his own and submitted his solution. The team of “Neanderthals” also submitted theirs.
You can probably guess what happened – the expert’s solution was rejected by the client, while the team of “Neanderthals” came up with a truly disruptive solution to the problem and was paid a substantial prize.
There is little debate: when formed correctly, collaborative teams outperform individuals, and because of this, teams are the cornerstone of our model.
Here’s how the IdeaConnection process works:
IdeaConnection forms teams made up of various individuals from our network of screened, curated experts. These are retired and semi-retired people, academics, consultants, industry veterans and thought leaders from many different disciplines. We invite specific experts from our network to participate on a project based on their expertise, experience, availability, education, feedback from other experts and many other data points.
We hand-pick just the right people from the applicants and form a small number of teams, typically 4-6, with each team being led by a professional facilitator.
Each participant then signs a non-disclosure and IP agreement.
Only once team members have been through this entire process – screening and acceptance into our network, invitation to a project, hand selection, signing of the NDA, and acceptance onto a team – are they finally given the full details of the project.
They never know the identity of the client.
Because the process gives us complete control over who sees the confidential details of the problem, the result is a significantly heightened level of confidentiality. This creates an environment where we can comfortably provide much deeper information about the problem than a public crowdsourcing exercise would allow. Participants gain an enhanced understanding of the goals and scope of the problem, which leads to better solutions.
Another benefit of this process is that it gives us the ability to screen people before they ever make it onto a team. If someone works for a university, we can check their university’s IP policies before they have a chance to participate. Because we control who makes it onto a team, we can handle issues that may affect the ‘cleanliness’ of the IP, greatly reducing the chance of the disaster that looms in conventional crowdsourcing.
Because of the quality of our network and the attention we put into the front end, we have never once failed to transfer IP cleanly and properly – something that most crowdsourcing platforms cannot claim.
Once this team building process is complete, the teams are launched.
In our experience, a major reason that crowdsourcing often fails to deliver a workable solution is not because people are not capable or didn’t have the right knowledge, but because there is a misunderstanding of the project goals and scope.
Here’s what we did:
The first task is to arrange a short conference call between each team and a technical lead from the client’s side. This allows each team to ask questions and achieve a deep level of clarity about the problem, its scope, and the expected deliverables.
Next, each team restates the problem in its own words, along with its understanding of the scope and any assumptions it is making. The client then reviews each of these and is given a chance to give the teams feedback. This ensures that the teams understand the problem and scope, and aren’t operating on false assumptions.
Having a clear understanding of everything about a problem is critical to being able to solve it – you can’t solve what you don’t understand.
When we implemented just these two processes, our success rate increased by about 25%.
These two critical processes cannot work in a conventional crowdsourcing environment. You can’t reasonably have 100 conference calls, and you can’t review 100 problem reiterations.
On a typical project, the teams work for a total of 10–12 weeks. At the end of this period, each team turns in a solution.
Instead of receiving a solution from every individual (as in conventional crowdsourcing), we’ve consolidated all of the knowledge into a small number of teams. What comes out of these collaborative teams is a small but extremely high-quality, well-thought-through, supported and, most importantly, manageable number of solutions. The team members do the peer review and triage of all the team’s ideas, and the client is only presented with the ideas that have made it through this process.
The focus is on quality rather than quantity. Quantity creates burden. Quality creates value.
An additional benefit of the small number of teams is that the risk/reward ratio is much more favorable for participants than if they were individuals competing against each other. There is still healthy competition, and input from a large number of people, but the competition is not so great that it reduces the quality of work. The output is real solutions rather than just ideas.
Because each team screens all of its proposed ideas and only submits the very best after careful consideration, we’ve also significantly reduced the risk of IP contamination.
From the experts’ perspective, one of the benefits of being on a team is that the process is much more enjoyable than sitting alone in a basement working on a problem:
Some time ago, one of our experts contacted us and told us that working on these problems in teams is the best job he has ever had – even if he were to never make any money doing it.
When people are enjoying themselves, they produce better results.
Our model has been proven again and again on projects for some of the largest companies in the world. Clients get access to solutions from the crowd, while avoiding or diminishing the impact of crowdsourcing’s fatal flaws.
The beauty of it all is that we get the diversity and reach of the crowd – with confidentiality, real two-way communication, clean and manageable IP, fully developed solutions rather than mere ideas, and far less burden on the client’s technical staff – while at the same time creating a system that delivers the highest success rate in the industry.
By innovating the innovation process itself, we have arrived at a more effective and safer model for crowdsourcing technical solutions.
Find out more about what we do here.