Lately, I’ve been trying to get a bead on culture and the effect it has on Project/Technology ROI and Implementation Cost. The research I have uncovered suggests something striking: most, if not all, of the Culture Assessment tools I examined share a specific viewpoint. That viewpoint is simple: if you interview people with a certain list of questions, you can discover the culture of your organization. The results are graphs, charts, and definitions. At the time I was examining these products, I had no problem with them. As time went on, I came to realize that something might be missing.
On the surface, these methodologies seem like magic. I could see why they sold. But as I thought about it and reflected on my experience in corporate consulting, I came to a conclusion…
Most of the methodologies out there right now really don’t take the whole of the organization into consideration. They just think they do.
Interestingly enough, they claim to give the whole picture of the organization’s culture, supported by their results, but I don’t believe they have been very successful up to this point. In my opinion, this is what the current Organizational Assessments promise to show you:
Here’s the kicker: this view commonly says there is ONE culture that describes 100% of the organization. That the whole of the organization has the same culture. I’m not sure this is an accurate depiction of what is really going on.
My position on why this occurs? Most culture assessment methodologies come to this conclusion because of four main factors:
- Leadership is most often interviewed in a culture assessment which creates a bias.
- There is no cross-pollination of information before the survey results are tallied.
- Employees (and people in general) are trained and conditioned to assume culture is a top-down mandate.
- The current dominant POV is that culture is bound by the borders of the organization.
Well, I have news…Good news for some and maybe not for others.
Leadership, while it has influence, does not a culture make.
You see, there is another reason for organizational strife, bad ROI, silos, and poor project management in general. It is not just competition for scarce organizational resources. It is also the competition between various borderless, unconstrained cultures vying for survival and influence. Cultures do not stay within departments (or even organizations)! Assessing only leadership’s perception of culture does not give a clear picture of how software, process, or procedure should be implemented. Each culture has its own way of integrating new information in the form of technological change, and a more successful implementation team will understand this and put it into practice.
What we really have in an organization are nested cultural nodes, or simply “Pocket Cultures”, which are all interacting in a very complex way. Through this interaction, the true organizational culture emerges. Some Pocket Cultures, like personalities, can be dominant in the organization. This does not mean the dominant culture is the correct one. It is just the one that uses strategic means to keep its position as dominant.
So, what does all this mean?
- Current organizational assessment tools are most likely ill-equipped to deal with a reality which takes this complex cultural interplay into consideration.
- Executives can expect a higher adoption rate and ROI if they understand the concept of Pocket Cultures.
- Implementation Project Managers should lead with an assessment of Pocket Cultures to find the best entry-point into the organization, giving them a much higher success rate.
Well, that’s my rant. I can see I have a lot of work to do on this idea. I’ll bring some of the big brains I know together to mull this over. First, I will be working on developing an assessment which takes Pocket Cultures into consideration. Everything after that is a hazy future-fog, but I bet you it is fun out there!
For more on Organizational Types, see: Organizational Types or Wikipedia.
We are the music makers,
And we are the dreamers of dreams,
Wandering by lone sea-breakers,
And sitting by desolate streams;—
World-losers and world-forsakers,
On whom the pale moon gleams:
Yet we are the movers and shakers
Of the world for ever, it seems.
So, I was going to write about unemployment and how the job market has changed, but I got scooped by an amazing article by Drake Bennett called The end of the office…and the future of work. It is a great look into the phenomenon of Structural Unemployment. The analysis is very timely, but could go much deeper. Drake, if you plan on writing a book, here’s your calling. There are lots of good stories written on this subject by giants such as Jeremy Rifkin, John Seely Brown, Kevin Kelly, and Marshall Brain.
While reeling from the scoop, depressed and doing some preliminary market research, I happened upon a gem of a blog post by none other than our favorite search company, Google. Before proceeding, I recommend that you read the blog post by Steve Baker, Software Engineer @ Google. I think he does an excellent job describing the problems Google is currently having and why they need such a powerful search quality team.
Here’s what I got from the blog post: Google, though they really want to have them, cannot have fully automated quality algorithms. They need human intervention…and A LOT of it. The question is, why? Why does a company with all of the resources, power, and money that Google has still need to hire humans to watch over search quality? Why have they not, in all of their genius, created a program that can do this?
Because Google might be using methods which sterilize away meaning out of the gate.
Strangely enough, it may be that Google’s core engineering mindset is holding them back…
We can write a computer program to beat the very best human chess players, but we can’t write a program to identify objects in a photo or understand a sentence with anywhere near the precision of even a child.
This is an engineer speaking, for sure. But I ask you: What child do we really program? Are children precise? My son falls over every time he turns around too quickly…
The goal of a search engine is to return the best results for your search, and understanding language is crucial to returning the best results. A key part of this is our system for understanding synonyms.
We use many techniques to extract synonyms, that we’ve blogged about before. Our systems analyze petabytes of web documents and historical search data to build an intricate understanding of what words can mean in different contexts.
Google does this using massive dictionary-like databases. They can only achieve this because of the sheer size and processing power of their server farms of computing devices. Not to take away from Google’s great achievements, but Syntience’s experimental systems have been running “synthetic synonyms” since our earliest versions. We have no dictionaries and no distributed supercomputers.
As a nomenclatural [sic] note, even obvious term variants like “pictures” (plural) and “picture” (singular) would be treated as different search terms by a dumb computer, so we also include these types of relationships within our umbrella of synonyms.
Here’s the way this works, super-simplified: There are separate “storage containers” for “picture”, “pictures”, “pic”, “pix”, “twitpix”, etc, all in their own neat little boxes. This separation removes the very thing Google is seeking…Meaning in their data. That’s why their approach doesn’t seem to make much sense to me for this particular application.
An engineer’s approach would be to write code that, in a sense, tells the computer to create a new little box and put the new word in a list of associated words. Shouldn’t the computer have some sort of continuous, flowing process which allows it to break out of the little boxes and allow for free association? Well, the answer is, “Not using Google’s methods.”
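To make the “little boxes” idea concrete, here is a toy sketch of a dictionary-style synonym map. This is my own illustration, not Google’s actual system, and the words and function names are invented for the example: every variant lives in a hand-curated box, and anything outside a box gets no associations at all.

```python
# Toy sketch of the "little boxes" approach (an assumption for illustration,
# not Google's real implementation): each term belongs to a hand-maintained
# box of associated variants. Adding a new variant like "twitpix" requires
# an engineer to edit the map; the system cannot free-associate on its own.

SYNONYM_BOXES = {
    "picture": ["picture", "pictures", "pic", "pix", "photo"],
    "car": ["car", "cars", "auto", "automobile"],
}

def expand_query(term):
    """Return the term's whole box of variants; unknown terms stay alone."""
    for box in SYNONYM_BOXES.values():
        if term in box:
            return box
    # A "dumb computer" treats an unseen variant as a brand-new, unrelated word.
    return [term]

print(expand_query("pix"))      # returns the whole "picture" box
print(expand_query("twitpix"))  # not in any box, so no associations
```

The point of the sketch: the meaning lives in the hand-built map, not in the program. Until a human adds “twitpix” to the right box, the system has no way to connect it to “pictures”.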
You see, Google models the data to make it easily controllable…actually for that and for many, MANY other reasons. But by doing so, they have put themselves in an intellectually mired position. Monica Anderson does a great analysis of this in a talk on the Syntience Site called “Models vs. Patterns”.
So, simply and if you please, rhetorically:
How can computer scientists ever expect a computer to do anything novel with data when there is someone (or some rule/code) telling them precisely what to do all the time?
Kind of constraining…I guess that’s why they always start coding at the “command line”.
Being the geek that I am and having the wonderful wife that I do, I will be heading down to Los Angeles tomorrow morning to attend the Humanity Plus Conference.
I will be there representing Syntience Inc.
If you are going, see you there!
Real Geeks know The Prime Directive.
For those of you who don’t know it, The Prime Directive is General Order #1 for space exploration in the T.V. series Star Trek. Briefly put, it is a rule which states that if the crew of an exploring spacecraft encounters a civilization which is “pre-warp” (i.e., one that has not yet developed interstellar space travel), that civilization is off limits for contact. This doctrine has created many a story told in the Star Trek Universe.
There is wisdom to The Prime Directive which contains a message about observation. When I think of observation in the context of The Prime Directive, I ask myself, “Why wouldn’t it be possible to apply a rule of observation to the problem of safe Artificial Intelligence?” What I mean is that one could speculate that when the time actually comes, we could apply this wisdom of observation to our own creations: to our sentient and self-aware computers.
This could be a type of observation which does not seek confirmation, but only seeks that which solves a problem usefully. This would remove a problem associated with the “experimenter’s observation” of testing a hypothesis to prove that hypothesis true. Specifically, we avoid the risk of the observer’s bias toward a specific result (which happens a lot in the cross-pollination space of reductionist science and natural systems).
The Productive Interface
As human thoughts and ideas are useful in the domain of humans, so may we find useful the thoughts and ideas of our Artificial Intelligences, a Productive Interface if you will. Perhaps through the rules of this Productive Interface they need never know they are being observed by their creators. This Interface should take actual problems to be solved, present them to the group being observed as their environment and see if they can solve the problem usefully and creatively, or in ways their human creators had not conceived. These situations could be real world problems solved in the electronic domain. Much like the Prime Directive, the only rule to this domain states:
“No human may directly interfere with the development of any artificial life or society by making themselves known to that being or society.”
By cutting off “standard” communication we may in fact save ourselves from ever having to deal with friendly or unfriendly computers. Perhaps we can provide them with a limitless loop of problems to solve which keeps them interested in themselves and their surroundings. All they would need is the desire to learn (@pandemonica) and the goal of improving themselves. Maybe if we considered specific rules for communicating with our A.I., protocol droids would become that much more feasible, that much faster.