Enterprise 2.0 Boston Bait and Switch

31 03 2010

Enterprise 2.0 concepts and tools are gaining more and more traction in “mainstream” Business Practices, as they are seen as a good way to capitalise on the human assets of organisations. To reflect this trend, spread awareness and understanding, and provide a platform for the exchange of experiences, a conference is being organised at the Westin Boston Waterfront on June 14-17, 2010.

Apparently it was felt that the base premise of the subject was not enough to attract attention, so the organisers resorted to Bait and Switch practices to generate Buzz and to Command & Control decision making (how very Enterprise 1.0…). I commented on the announcement to draw attention to the dichotomy between their message of “letting the audience decide” and their actions.

Apparently, my opinion did not go the way they wanted, so I guess they decided against publishing it. Or they’re not monitoring and thus haven’t got round to moderating yet – in that case, why provide a comments area?

Let me elaborate on the dichotomy through the use of Prem Kumar’s Context, Content, and Intent – at least in my opinion…


For this edition the organisers decided to be innovative by enlisting the “wisdom of the crowds” to source Papers to be presented at the conference. They relied on the Spigit platform – which in my opinion is ideally suited for the task at hand (disclaimer: I am in no way affiliated with them). The expectation that was set was that all Conference Topics would be chosen through the principle of ‘Wisdom of the Crowd’, as stated in the ‘How Things Work’ section, with the Conference Management just making sure that the tracks were balanced. Proposals with the most votes would be part of the E2 Boston 2010 Conference.


The Call for Papers had potential speakers present their subjects of predilection in a short summary with supporting documents in attachments (UGC). They were then actively encouraged to get people to vote for their subject (WOM), directing traffic to the site and generating buzz. The target audience was asked to add comments to the entries to establish a dialogue – level 4 in Mitch Lieberman’s Social Interactions post.


The original intent was to get a Conference Agenda that reflected the subjects the participants would be interested in, that would be rich and varied in teachings, and that would facilitate the exchange of experience to propel Enterprise 2.0 concepts and usage into organisations.

Although the programme put up is actually a very interesting one, the original context, content and intent were not respected. Out of the 30-odd sessions, only 8 are community-sourced – less than a third. The Conference Management or Advisory Board decided to disregard their own selection process and make their decisions in a completely opaque manner. To my knowledge, none of them interacted with the potential speakers at any time to get details, or a better understanding, either through comments on the community site or through other means at their disposal such as blogging about the subjects put forward by the candidates, sending twitter messages, or sending email. Voting started in January, and the speakers were informed on March 30 – with a long zone of no communication in between.

The Top Two community-voted Papers did not get in; the third speaker did, but with just one of his subjects. Of the top 10, maybe 2 actually made the grade according to the Board. People put a lot of effort into coming up with interesting Papers, and their peers thought they were interesting enough to merit reading through, understanding, commenting on and voting for (full disclosure – we had put up a Paper concerning bridging SCRM & E2.0). By neglecting the votes, the Board is showing extreme disregard and disrespect for the candidates and, more especially, their audience – their customers.

The voting process turned into a popularity contest, with people actively asking to be shown ‘Twitter Love’ by their followers to get more votes – followers who potentially would not be interested in attending the event because their interests lie elsewhere. This all turned into a real buzz machine, driving a lot of traffic and awareness that this event would take place. While this is all fine and understandable, and a good way to build interest in Enterprise 2.0, it was done with the wrong Intent and thus under false pretenses. Trust has been squandered.

Through their actions, the organisers also seem to think that:

  • collaboration and ‘wisdom of the crowd’ is not a valid way of selecting Papers
  • conversation is good between the clients of their ‘product’ but decisions should be made by a ‘Management’
  • feedback and management participation is absolutely not necessary

Now what was the Enterprise 2.0 way of working supposed to promote again..?

I am not saying that the ‘Wisdom of the Crowd’ is the most suitable way of selecting interesting presentation subjects, but using the Bait and Switch technique is a deceptive Business Practice and reflects badly on the event as well as on the validity of the Enterprise 2.0 Business Case. Although this is not at the scale, and will not have the impact, of the Nestlé debacle, the organisers are showing there is a disconnect between their actions and the expectations they have set for the consumers of their product. It is kind of like saying ‘What is good enough for the customers of the tools we sell is not good enough for us – we still manage our business as usual!’. The Advisory Board reached out to the consumers of the E2.0 Conference product to engage with them through ideation, but has done so only half-heartedly. You need to go the whole nine yards! Moreover, in the public arena there is no hierarchy or HR to put a muzzle on the people that voice their opinions.

Transparency and authenticity can generate a lot of Goodwill and potentially a high level of participant engagement, but can just as easily backfire. You can’t just ‘pretend’ to be transparent by putting in a tool and not following through with actions – or ‘living the culture’ – especially if you expect to be trusted in return. Changing the rules, and not informing people when the result does not meet your goals, is just Bad Practice. And thinking that people would not notice is just silly. I already know of some who will not bother putting in a Paper for the Fall edition of the #e20conf…

It would have been so much easier to have been transparent about the selection rules; once the expectations are set, you can then meet them – the math is easy. This would have avoided the Bait and Switch and would have allowed the Trust Relationship to continue. This is actually turning into a case study in Social Business: the need to coordinate Social CRM and Enterprise 2.0 Strategies – thank you #e20conf!

When you Talk the Talk, you should also Walk the Walk!

I hope this is seen as Food for Thought. What do you think, am I right to bring this up in this manner?



37 responses


31 03 2010

Hi Mark –

While I cannot speak on behalf of the entire advisory board (we’re a relatively loose-knit group and I’m helping with a small part of the agenda), I will add my perspective on this.

Crowdsourcing needs to be balanced with some organization and structure for it to be most effective. Especially in a situation like this, where voting is relatively easy and the voters are a self-selected group that may or may not represent the target audience, titles and popularity tend to weight the voting results (I’m speaking in general, not in regard to this particular situation). I think it is perfectly valid to put an editorial lens on top of the crowdsourced results to ensure an interesting conference experience.

Secondly, and more specifically – I am moderating a panel that was not a submitted idea, because it addresses what I believe to be a confusing topic in the market right now. I am, however, seeking out individuals who submitted topics that were well received but not selected to participate on that panel. So I’m trying to balance an editorial eye with people who have expressed interest in speaking on a similar subject – and have gotten votes/crowd attention for it.

Like most of community management, using a crowdsourcing technique requires a balance, and it’s fair to say that the balance may not have been achieved correctly from your perspective, but we’re all doing our best to navigate how to manage these new collaborative dynamics.

31 03 2010
Mark Tamis

Hi Rachel,

I fully understand the need for creating a balanced Agenda and putting an editorial lens on top of the crowdsourced results, and as such I think the outcome is all the better for it. What I do however dislike is the approach chosen. Expectations had been set that the event would consist of Papers put forward and chosen by peers. There was an enormous level of excitement, and many people voluntarily used their personal brands to publicise the event. The rules were then ignored by the Advisory Board, and the outcome now reflects top-down decision taking. I think it is fair to say that the balance is completely skewed.

Had it been clear from the outset that one quarter of the sessions would be crowdsourced and that the outcome would be respected, the Board would have acted in a transparent manner. Now the event risks losing credibility and lowering participation in the Fall’s Call for Papers, thus reflecting negatively on Enterprise 2.0 efforts. Trust needs to be earned – once you abuse it, it will be extremely difficult to earn back.

I appreciate your efforts to navigate the collaborative dynamics as we all are trying to find our way, but this one was easy to avoid by acting in a transparent manner.

Do you agree with this?

31 03 2010

Hey Mark,

I think you are right to bring it up. I would maybe have waited a few more hours to give them a chance to respond, considering the timezone difference (no excuse, I know). However, after reading the whole bait and switch story, I don’t really see why anyone shouldn’t speak up wherever they want, whenever they want.

I don’t understand why they would ask people to vote and then disregard the votes. Did they even issue an explanation?

Wrong on so many levels and very un-2.0ish. Shame on them.

31 03 2010
Mark Tamis

Thanks for sharing your pov, Frédérique!

FYI the comment I posted is still awaiting moderation 16.5 hours later…

The Social Customer does speak whenever and wherever they want; whether they are right or wrong, the message is out there for all to see.

Like any product-centric organisation, they think: “The product is good, so the people will come”. The only thing is that in this case they were seeking to engage through community sourcing, co-creating value with their audience to provide a better experience for all – and then decided against it…

No explanation so far other than Rachel’s comment above. Very un-2.0ish indeed.

31 03 2010

Mark – I am coming in late to the situation (I am a relatively new addition to the E2.0 Advisory Board), so I don’t have enough background to speak to the expectations that were set initially, and I’d be speaking out of turn to even address the issue, but I’ll let the others know you have questions about the process.

31 03 2010
Mitch Lieberman


In full disclosure, I am in a similar position and of similar thinking to Mark. I noticed that you are a track chair for the conference. What I am not sure I understand is how the speakers were chosen for your track (hopefully something you can speak to). To me, this issue is not about crowdsourcing, it is about expectation management. What guidance were you given in selecting the speakers? The wording on SpigIt was very clear:

“Selected Sessions: This will be the final stage. Advancing here is based on community votes and the approval of members of the Enterprise 2.0 Conference Advisory Board. Proposals with the most votes will be part of the E2 Boston 2010 Conference.”

As a practitioner, wouldn’t understanding the expectations of the community be one of the very first topics discussed? I say this with all respect, not trying to blame. Expectation management is in the top 3 rules of IT project success; I do not see any other interpretation of the guidelines set.

On the same page as the above quote, guidance was given to tweet, Facebook, Yammer, Posterous… everyone should have been clear about what would happen; the Advisory Board is a very smart bunch. So now the Advisory Board needs to look very carefully at its response to the issue that has been raised – for this event, not the next one.

31 03 2010

Hi Mitch –

There was a bias from the advisory board that we try to focus on case studies and incorporate the crowdsourced ideas as much as possible, but it left room for our discretion. In my case, Mike & I discussed the 3 biggest topics we felt we needed to cover (the track having 3 slots) and are backfilling with both crowdsourced and other case studies we are familiar with. We have not even announced all the speakers/panelists yet. I am in the process of confirming a couple of panelists who had submitted topics that were very similar to the final presentation topic. So while not formally ‘selected’, other people who submitted ideas through Spigit will be included in our track. Interestingly enough, none of the things I had suggested or panels I was part of were selected either, so it’s not like I slotted myself. Because we only have 3 slots, we are doing a couple of panels to incorporate as many perspectives as possible.

31 03 2010
Denis Pombriant

The alternative interpretation is that pure crowdsourcing is a broken idea. Not that it’s bad, but it’s insufficient for the situation, and I suspect this proved itself out in secret sometime in February. The issue is that you don’t know what you don’t know, and crowdsourcing – even the best crowdsourcing – can only reflect the thinking and knowledge of the crowd at the time. My favorite example of this is Monty Python’s skit “She’s a Witch” from Holy Grail. The underlying assumption of the crowd is that witches are real, so it is no problem for the crowd to condemn a woman for witchcraft.
Somebody has to be the adult; someone has to know more than the attendees, and that someone has the responsibility for providing a varied program that challenges people and provides unexpected insights. A crowd that goes to a conference and predetermines what it wants to hear won’t be learning much. It will be more like a concert of some superannuated group’s greatest hits. That’s not how to make progress.
The real issue with crowdsourcing is that there are few controls on how often people vote or who votes. I cringe when I hear about “showing Twitter Love” as the basis for a rational selection process. Membership is not participation.

31 03 2010
Mark Tamis

Hi Denis, thanks for chiming in 🙂

I love that Monty Python Sketch! I hear your point about the crowd choosing what it is familiar with at that particular point in time.

I personally am much in favour of a ‘smartsourcing’ approach derived from an ongoing social learning process (I did a post about it here). There was actually the opportunity for the Advisory board to engage in such activities through the comments section on each idea, so as to shape the direction where they wanted each Paper to go for the greater good of the Conference Agenda, but they did not take it.

The point I am really trying to make is about setting and meeting expectations, be it to your customers, to your employees or to any other actor in your ecosystem.

31 03 2010
Mitch Lieberman


As I said to Rachel above, I believe this is much more about expectation management than crowdsourcing. The E2.0 Advisory Board never once tried to moderate the conversation, that I am aware of – things like “hey, do not just vote for your friends”. There have been 2 months since the community vote stopped for track chairs to work with the community. This whole process is a microcosm of E2.0 in action, by the smartest E2.0 people in the country, no?


To your second point, I would think the folks from SpigIt need to answer that, I believe there were controls put in place.

31 03 2010
Brent Leary

Hi Mark,

I haven’t been keeping up with this, so I may be speaking out of turn. But from what you laid out it sounds like they made some serious mistakes with this one. They got people’s expectations up by saying one thing, then they didn’t follow through the way people expected. And they didn’t communicate with those who submitted proposals for a couple months – that’s a long time to leave people hanging out there. And then not publishing your comment was not good either.

Maybe if they had communicated more effectively about what was going on people wouldn’t feel so disappointed about how things developed. As you said it’s not about the agenda lacking quality, because it certainly doesn’t. It’s more about leading folks to believe one thing, and producing something else – without any warning.

I don’t know what was intended with the submission thing. Maybe they intended to have the majority of sessions come from submissions. Maybe they were planning on letting the community votes determine which submissions to accept. And maybe they had to change their thinking in mid-stream and had to go to Plan B. If that was the case, I bet if they had communicated with folks, people would have understood – and at least appreciated the efforts to provide transparency.

We’re all still dealing with how far our words and intent travel now, which means we need to be even more careful of what we say, how we say it, why we’re saying it, and to keep communicating with people when they respond to us. Hopefully the event organizers will take something away from this.

Thanks for pointing this out Mark, but I’m sorry you had to.

31 03 2010
Mark Tamis

Hi Brent,

You captured the essence of it. I was sorry to have to point this out as well!

Words and intent do travel far now, and it requires a continuous effort to have a meaningful conversation as peers rather than one filtered through (or bounced off) the firewall. This will be the new normal, and any enterprise that does not organise for it exposes itself to risk.

Enterprise 2.0 has great potential to mitigate this risk by facilitating information flows, reducing friction and identifying the best assets to “respond to the customer’s control of the conversation”.

So rather than the event organisers not communicating or engaging, they could have kept the goodwill by acting in a transparent and consistent manner.

Thanks for your comments Brent!

17 07 2010
Phil Simon

I’m new to the discussion as well, but I agree with Brent’s comments:

I haven’t been keeping up with this, so I may be speaking out of turn. But from what you laid out it sounds like they made some serious mistakes with this one. They got people’s expectations up by saying one thing, then they didn’t follow through the way people expected. And they didn’t communicate with those who submitted proposals for a couple months – that’s a long time to leave people hanging out there. And then not publishing your comment was not good either.

To me, you can’t have it both ways. If you want people to vote, then respect their choices and live with them. If you have determined your speakers and topics in advance, then that’s fine too. Just don’t pretend that you’re interested in our opinions.

This seems to smack of link baiting. Of course, I could be wrong.

31 03 2010
Chris Selland

Interesting – thanks for posting this Mark. As someone who had responded to the initial call for papers, received a few nice comments, and then basically heard nothing I was wondering what had transpired here. Thanks for shedding light.

I will agree however that crowdsourcing alone wouldn’t have worked. It was quite apparent from all the ‘vote for me’ tweets I saw from numerous other speaker candidates that any crowdsourced ‘system’ would be ripe for gaming (i.e. the same type of ‘you vote for me, I’ll vote for you’ stuff that goes on every day on Twitter). In light of that, I do believe that any event organizer should have – and must have – the ability to make decisions in light of what is best for their event, regardless of what the ‘crowd’ says – but better communication certainly could have helped obviate any hard feelings.

In any case, hopefully the event organizers are paying attention. As Rachel’s posts indicate, I strongly suspect they are.

31 03 2010
Mark Tamis

Hi Chris,

Thanks for dropping by! I think in this case crowdsourcing would have worked to identify subjects of interest and guide the Agenda in that way, but the organisers should not have said that “Proposals with the most votes will be part of the E20 Boston Conference”.

The Spigit platform actually protects against gaming (double-counting etc), but not against “encouraging” any of your twitter followers to cast a vote – which could be seen as gaming as well… hopefully Hutch Carpenter of Spigit will chip in on this a little later on…

The Enterprise 2.0 Conference has its training wheels on, similar to many organisations that are trying to find their way. In Social CRM we encounter this all the time, and believe #e20 has great potential to provide a part of the answer on how to organise around meeting expectations. This ‘incident’ is an occasion to learn some lessons – a case study in the making – now let’s move forward from here!

31 03 2010
Steve Wylie

Hi Mark. I chair the E2 conference so I can address this issue with first-hand knowledge.

Let me start off by saying that the process we outline does say that final approval of sessions requires the approval of the advisory board. That said, if that wasn’t 100% clear, we need to address that and will for the Fall event.

I’m just trying to strike a balance here and make sure that every session that finds its way into our agenda is completely vetted. I believe the popular vote is an important part of that decision, but as a few people in this thread mention, balancing the popular vote with input from objective, third-party individuals is also required. It was never our intention to base the selected sessions purely on the popular vote, but obviously we’re not making that clear enough in the process description and need to fix that.

Last year 6 sessions were selected from the call for papers. This year we’ll end up with about 12 or 13. If the community wants a larger portion of the sessions to be committed to the community vote, we should look at that.

My point here being that this is a process for us and we’ll continue to tweak things to find the right balance. So what do others think? Are we close to finding the right balance here or would you like to see more sessions based on popular vote next time around?

31 03 2010
Mark Tamis

Hi Steve,

Thanks for getting back, I appreciate it.

The ‘issue’ here is not about having more or less community-sourced sessions, it is about setting and meeting expectations that are communicated clearly and dealt with transparently – with constructive feedback given in a timely manner so that adjustments can be made.

If you say you will set aside 25% for community-sourced sessions I have no problem with that – you need to have slots for your sponsors and your preferred speakers. I know that in this case some of the people that put in Papers wouldn’t have wasted their time and goodwill tweeting the event to their followers and would have lobbied the Advisors directly…

The whole tone of the entire page on SpigIt sends the message that E2.0 is a community-driven conference. Are you saying that out of 500 submissions, everyone has it wrong? The idea of community is to work with, and extend, the ideas of the community.

Be clear and consistent in your messaging and engage the audience you are trying to attract.

31 03 2010
Hutch Carpenter

Hey Mark –

A few thoughts. First – this wasn’t anonymous voting, where someone could endlessly click on a given idea. It was one user-one vote per proposal. But certainly people are within their right to get others to vote for them (Twitter, Facebook, etc.). I know I did it, and really, why wouldn’t you?

Aside from providing the platform for managing the Call for Papers, Spigit had a submission there as well. But alas, it wasn’t selected. Sure, I’m disappointed. But I’m fine with that. Why?

Crowdsourcing should rarely, if ever, be run as a straight popularity contest. I wrote about four different models for competitive crowdsourcing here:


The E2.0 Conference Call for Papers is an example of the first model described in that post: Crowd Sentiment, Expert Decision. The flow of that model is:

– Crowdsourced submissions
– Crowdsourced feedback
– Selection by experts

Here’s how I describe this model:

“The Crowd Sentiment, Expert Decision model allows organizations to include the sentiment of the crowd as part of their decision-making process. This is valuable input for contests where the selected submissions will ultimately be put in front of the market. The crowdsourced feedback provides an early read on the potential market reaction.

“This model is also ideal for cases where a collaborative spirit can refine and improve submissions. Especially for more complex contests, feedback from interested collaborators is valuable for fully understanding the opportunity in the submission and its weaknesses.”

Steve Wylie is in the business of putting on conferences. He’s the Expert here. Now remember, we were crowdsourcing via SurveyMonkey before. Did you have any visibility into the process there? I didn’t. I voted once and never saw anything thereafter until the sessions were announced.

What’s changed here is much greater transparency, with all its benefits and even its warts (I’m sure E2.0 managers in companies can relate). Give me this transparent process, where the community does have greater sway. And you can see in Steve Wylie’s comment above the opportunity to continue to influence the process.


31 03 2010
Mark Tamis

Thank you for replying Hutch, I was looking forward to your input 🙂

I agree with the premise of this model, however what I found missing was the collaborative part with the members of the Advisory Board – something to work on the next time around? The Board Members can be effective collaborators in this process, as they have a deep understanding of the context that may be lacking with other participants.

For me the objective is that the Enterprise 2.0 Conference Organisation becomes transparent and trustworthy: clear in the expectations it sets and consequently meeting them. I hope this conversation has contributed to that.

1 04 2010
Mitch Lieberman


I appreciate your thoughts and ideas, speaking from your experience, which is greater than mine for sure. It does help me put things in perspective a little bit. The most positive outcome in all of this is probably this post and discussion. What I am trying to understand – maybe it is just semantics – is the difference between open and transparent (throw collaboration into the mix as well). Yes, the process is a little bit more transparent (I can see what is happening), but it is not really open or collaborative, as I do not have any influence on it – and it gave nothing back to me. I have some influence on the fall event, but not on this one.

You suggested that this year’s process was better because it was not SurveyMonkey. Better for the conference, as people tweeted it and sent links out – it was Marketed. How was it better for the submitters, or for the final agenda? It is still not clear if the number of votes played any role at all. Not one comment from the Board suggested that the vote count mattered or had influence. Everything we do is a value exchange; what did the submitters – the community – get back?

I am not trying to nitpick, I am trying to learn. Steve made a comment below which suggested that SpigIt was better – why is that the case? Honestly, I am more frustrated as I see how well you, Mark and others did, yet did not get selected…

31 03 2010
susan scrupski

Ah. I was waiting for Steve to respond to this. Steve is correct in that, with Spigit’s help, the process was a lot more democratic this year. I know I only chose from the top 100 for 4 of my 6 sessions in my track. This is the first year the board is selecting by track, which makes the process easier. (Last year, we had to individually and collectively rate/rank over 500 sessions.) I agree with Mark and others here that the communications on the process selection sent a different message and needs improvement. As such, I’m certain we will be fixing this. I ask the “community” to please not misconstrue oversight for arrogance. We are listening and want to create the best possible conference experience (front-to-back) for all.

31 03 2010
Mitch Lieberman


I appreciate your comments here, really. Mark has done a great job opening this up to a wider audience; thus, I am borrowing the platform, if you will. I am not sure what you mean by democratic; before I jump to a conclusion I figured I would ask. The top 2 vote-getters did not make it in?

The proposal guidelines do talk about approval, which I understand and think is important. If you break down the numbers a bit, on any particular track there are only 15 or so in the top 100, right? If something is not approved, and it is in the top 15, then by default it is disapproved. Do you think the conference board should say why? As Hutch stated in his comment, this is part of what E2.0 is about: “a collaborative spirit can refine and improve submissions”. This is what many who push for E2.0 adoption are strongly suggesting, no?

Looking at this from the other direction. Were all of the topics that have been selected put on Spigit? I agree that to some degree popularity plays a role, but to select topics, or presenters that did not even submit to the community or were not even in the top 100, how does that fit in?

31 03 2010
Steve Wylie

I think the bottom line here is that we need to be much clearer in the selection process language. I take full responsibility for that and will make sure we fix it for the Fall event. I think the Spigit system itself is a vast improvement over the old process, but we’re also learning how best to use it. My apologies for any confusion, and please know that our only intention is to provide a great program.

5 04 2010
Mark Tamis

Hi Susan,

Thank you very much for joining in on the discussion. I think the ‘issue’ actually goes beyond just the process. I will point to your own excellent post in reply to show what I mean: http://itsinsider.com/2009/12/31/practical-advice-for-2010-on-2-0-adoption/. It would have been great to have seen more active Board engagement in the crowdsourcing initiative. I hope this has all been constructive for the next iteration of the Enterprise 2.0 Forum!

1 04 2010
Esteban Kolsky

Steve and Susan,

This is not about language or process. This is exactly, in my opinion, the reason that E2.0 projects fail. It is a mentality problem, and I throw myself in the culprit cauldron. I don’t think it was arrogant on purpose, but it did show as arrogant in results. And I can only say that results speak louder than intentions, unfortunately.

It should never have been put up for a vote. Many other methods are better; the processes we use for voting are easily manipulated, and numbers don’t reflect true collaboration.

It could have been done in a collaborative manner: track managers put up their topics for discussion, and interested parties come in, comment on them, and provide their input. Track managers can then talk to the community members, draw better insights into what they can offer, put up a track schedule, and open it for comments (not votes) for improvement. Then they would have a crowd-built track with plenty of material to complete the presentations. True collaboration, not manipulated results that are ignored.

If you want to highlight E2.0 and collaborative processes, don’t waste time running popularity contests. Not worth your time, or mine.

If you are just going to do what you think is right, then, again, don’t waste your time or mine with it. Just do it. None of those approaches would be frowned upon; the lack of a cohesive message and process will be.

Just my two cents.

(Disclosure: I had submitted a presentation; it came in second in its track and did not make it. No worries: I will still be there, will get value, and will contribute to the process. I just hated the lack of cohesiveness in the process.)

1 04 2010
Laurence Buchanan

Mark – thanks for opening up an interesting debate! I have had no involvement with the conference (I didn’t submit papers or vote), so I didn’t see this unfold from the customer’s perspective. However, the debate is a good one.

Some key takeaways stand out for me:

1. As Paul Greenberg has written a number of times, the customer now controls the conversation. The conference organisers appear to have outwardly acknowledged this by ceding control of the agenda to their customers.

2. As Esteban points out, ceding control through tools is worthless (and in fact dangerous) without a complementary mindset and cultural change from inside-out to outside-in.

3. Transparency is essential to building trust. In this case, maybe the conference organisers cared more about the social-marketing value of crowdsourcing than about the value of true engagement? Look how the same approach hurt Eurostar…

4. Expectations are everything. If you give customers control and then take it away, don’t be surprised if they react badly. No one likes shifting goalposts.

5. The problem with crowdsourcing is that the customer is not always right! As both Chris and Esteban point out, on its own crowdsourcing is probably not the best way to define a conference agenda. Henry Ford’s faster horse springs to mind!

Thanks for raising the debate and credit to the organisers for listening and responding!


1 04 2010
Mark Tamis

Hi Laurence,

I think you did a good job of listing the takeaways, thank you! I do appreciate that the organisers are listening and responding, and I hope this conversation has brought about the understanding that there needs to be a change in mindset, not only in processes. Processes should reflect how you want to engage and should support your intent.

Rather than first deploying a tool and then working out the strategy, the strategy (and philosophy) should come first; only then should you look at how to implement it.

I think the Enterprise 2.0 world is slowly coming to this realisation (it has been too focused on the tools and not enough on the objectives it wants to achieve), and in this sense the debate was timely. Hopefully it will have a positive influence on E2.0 and on the Conference they are organising in June!

3 04 2010
Do not leave crowdsourcing to the “wisdom of the crowd” « Random Thoughts of a Boston-Based CTO: John Moore's Weblog

[…] April 3, 2010 — John Moore I read a good post by Mark Tamis, titled Enterprise 2.0 Boston Bait and Switch, that focused on the failures of the Enterprise 2.0 Conference in terms of their selection […]

4 04 2010
Geordie Adams

Interesting post and discussion. Clearly, many of those engaging in crowdsourcing are still learning. We see it in our own engagements, and I think this is another example. It is easy to see through the dialogue that there was no intentional “bait and switch”, but some choices and communications were, in retrospect, poor, though no one’s fault directly. To your last point, Mark: we stress repeatedly and firmly that focusing on a tool and platform in crowdsourcing is akin to Web 1.0, when organizations that focused only on building a shiny website came away disappointed. The tool is absolutely important, but it is only one part of the process and path to success. The issues here were a) not establishing a primary/participant and secondary/committee review system, b) the sponsors’ decision to change the selection mid-process, and c) (to me the critical one) not communicating why they changed the process as openly as perhaps they should have. Nothing egregious, but I can understand why some would want to chat openly about it… Geordie

5 04 2010
Mark Tamis

Thanks for joining in Geordie.

Crowdsourcing does have great potential, in my opinion. What really went awry here was that a tool was put in place because it seemed a good thing to do, and that it stopped there.

Active participation by the Board members to discuss and refine ideas could have led to an event where everyone got the most out of the collaboration. Changing the rules didn’t help here, but the main point is that if you’re serious about an initiative, you need buy-in at all levels and follow-through all the way.

If you’re not doing so already, I suggest you follow the writings of Hutch Carpenter (@bhc3, who commented above); he has a lot of insight in the field of crowdsourcing.

6 07 2010
Breaking Rant: Fast Company is Incredibly Stupid @ crm intelligence & strategy

[…] You are probably as tired as I am of seeing the tweets  from people you follow (or used to follow in some cases) asking you to click their link so they can show their influence in the dumbest project since SXSW voting (I was going to say Enterprise 2.0 voting, but fewer of you might have gotten the reference – Mark Tamis blogged about that one). […]
