Culture: Old and New Assumptions

January 10, 2011 3 comments

Lately, I’ve been trying to get a bead on culture and the effect it has on Project/Technology ROI and Implementation Cost.  The research I have uncovered suggests something very striking: most, if not all, of the Culture Assessment tools I examined share a specific viewpoint.  That viewpoint is simple:  if you interview people with a certain list of questions, you can discover the culture of your organization. The results are graphs, charts, and definitions.  At the time I was examining these products, I had no problem with them.  As time went on, I came to realize that something might be missing.

On the surface, these methodologies seem like magic.  I could see why they sold.  But as I thought about it and reflected on my experience in corporate consulting, I came to a conclusion…

Most of the methodologies out there right now don’t really take the whole of the organization into consideration.  They just think they do.

Interestingly enough, they claim to give the whole picture of the organization’s culture, supported by their results, but I don’t believe they have been very successful up to this point.  In my opinion, this is what the current Organizational Assessments promise to show you:


Business As Usual

Current Presiding Cultural Viewpoint

Here’s the kicker: This view commonly says there is ONE culture that describes 100% of the organization.  That the whole of the organization has the same culture.  I’m not sure that this is an accurate depiction of what is really going on.

My position on why this occurs? Most culture assessment methodologies come to this conclusion because of four main factors:

  1. Leadership is most often interviewed in a culture assessment which creates a bias.
  2. There is no cross-pollination of information before the survey results are tallied.
  3. Employees (and people in general) are trained and conditioned to assume culture is a top-down mandate.
  4. The current dominant POV is that culture is bound by the borders of the organization.

Well, I have news…Good news for some and maybe not for others.

Leadership, while it has influence, does not a culture make.

You see, there is another reason for organizational strife, bad ROI, silos, and poor project management in general.  It is not just competition for scarce organizational resources. It is also the competition between various borderless, unconstrained cultures vying for survival and influence.  Cultures do not stay within departments (or even organizations)! Assessing only leadership’s perception of culture does not give a clear picture of how software, process, or procedure should be implemented.  Each culture has its own way of integrating new information in the form of technological change, and a more successful implementation team will understand this and put it into practice.

Mamma mia!  However will we sort through this mess?

A More Realistic Cultural Viewpoint

What we really have in an organization are nested cultural nodes, or simply “Pocket Cultures”, which are all interacting in a very complex way.  Through this interaction, the true organizational culture emerges.  Some Pocket Cultures, like personalities, can be dominant in the organization.  This does not mean the dominant culture is the correct one.  It is just the one that uses strategic means to maintain its dominant position.

So, what does all this mean?

  • Current organizational assessment tools are most likely ill-equipped to deal with a reality which takes this complex cultural interplay into consideration.
  • Executives can expect a higher adoption rate and ROI if they understand the concept of Pocket Cultures.
  • Implementation Project Managers should lead with an assessment of Pocket Cultures to find the best entry-point into the organization, giving them a much higher success rate.

Well, that’s my rant.  I can see I have a lot of work to do on this idea.  I’ll bring some of the big brains I know together to mull this over.  First, I will be working on developing an assessment that takes Pocket Cultures into consideration.  Everything after that is a hazy future-fog, but I bet you it is fun out there!

For more on Organizational Types, see: Organizational Types or Wikipedia.

My Performance Review 2010 and Some Shout-Outs

December 29, 2010 2 comments

It has been almost a year since my last post.  I can only say that I had a few reasons for this:

  1. I had nothing to write about.
  2. I wanted to work on listening/reading over telling everyone my POV.
  3. Twitter is a VERY seductive mistress for someone with writer’s block…Much too easy to send out a link with short comments instead of writing something of substance.

Excuses, excuses…

So, I am happy to report that I survived my first year as an independent consultant.  It has been a very interesting experience that I recommend for anyone who needs a jolt back to life.  When I first started, I had some really bad habits that I needed to break.  Most notably: communicating in a way that people understand.  The most interesting difference for me was that there is no feedback net.  I have always loved getting feedback from my peers and managers to improve my performance.  As an independent, I found none of that, and I felt myself go into a stall.  I actually got hungry for someone to tell me how to improve myself.  It is a very different experience to be judged solely by the market and not have someone say:

“Michael you need to be more/less ________.”

I came to the conclusion that I am going to have to give MYSELF a year-end review.  I have been through this process many times and never really liked the incomplete criteria by which I was judged.  With that in mind, I researched some personality traits and skill sets that I would like to include in my year-end assessment. Maybe in the future some of these will be included in the development profile of a 21st-century contributor.  This is in no way complete, but I would like to get it out there…Without further ado, I give you my annual performance and development review:

For the fun of it.

If you know me and disagree with any of the information above, let me know!  I would love to hear from you and it would also scratch my itch for feedback.

Thanks so much to Traits of Human Consciousness for providing an exhaustive list from which to select.

Finally, some shout-outs to the people who have played a role in my life over the last year (In no particular order…):

  • Jim Davis for his incredibly open and honest communications with me.  May your search for truth, love, and beauty be fruitful.
  • Monica Anderson for showing me an entirely different perspective on how things really work in this reality. May our friendship continue and your ideas be embraced.
  • Scott Blumin for giving me a chance when no one else did.  You are a sage among men and I am honored to know you.  You have my respect and allegiance in everything we do.
  • Lang Davison for listening to my crazy rants and helping me see the possibility of a divine construction in the workplace. May all of your endeavors be blessed and love be with you everywhere you go.
  • Michael Massey for showing me that the Buddha is smiling for a reason.  May the cosmic joke tickle you pink and let all those in your presence be consumed by the glorious laughter you have given me.
  • David Foox for being a brother in dark times.  May you receive as much joy from creation as you have given me in allowing me to participate in your wonderful journey.
  • Bruce Kunkel for always confirming my beliefs with love, appreciation, and excitement.  You, my friend, live the life of a true and uncompromising artist.  May good fortune come to you and love be your guide.
  • Jason Salzetti for setting me free. If I didn’t “get it” before, I am getting it now.
  • Bernd Nernberger for participating in and promoting the crazy stuff we work on at Syntience. May your new year be filled with wonder and discovery!
  • Geoff Brown for being there at the most unexpected times.  May all of your plans come to fruition.
  • Michael Marlaire for believing in me and never forgetting to send me my NASA invites and updates.  I wish you health and happiness and I appreciate the joy you bring into every situation.
  • Michael Kenny for teaching me valuable lessons about how the world works.  You have touched my life in ways that you could not imagine.  I wish you and your family the best in the coming year.

If you are not listed here, apologies!  I will be sure to tell you how special you have made my year.

Happy New Year!!!!

We are the Music Makers and We are the Dreamers of Dreams

February 20, 2010 Leave a comment

We are the music makers,
And we are the dreamers of dreams,
Wandering by lone sea-breakers,
And sitting by desolate streams;—
World-losers and world-forsakers,
On whom the pale moon gleams:
Yet we are the movers and shakers
Of the world for ever, it seems.

Arthur O’Shaughnessy

Semantics vs. A.I. – Meetup & Debate!

February 14, 2010 Leave a comment
Semantic Web Meetup


Coming up at the Hacker Dojo we have quite the interesting Meetup.  There is going to be a debate between Jeff Pollock, Monica Anderson, and Dean Allemang about the Semantic Web.  Knowing Monica personally and working on the Syntience project, I can say that this will most definitely be a heretical experience, to say the least.

Standing room only!  Here’s an excerpt from the site:

“Jeff Pollock – Mr. Pollock is the author of Semantic Web for Dummies and is a Senior Director with Oracle’s Fusion Middleware group, responsible for management of Oracle’s data integration product portfolio. Mr. Pollock was formerly an independent systems architect for the Defense Department, Vice President of Technology at Cerebra and Chief Technology Officer of Modulant, developing semantic middleware platforms and inference-driven SOA platforms from 2001 to 2006.

Monica Anderson – Ms. Anderson is an artificial intelligence researcher who has been considering the problem of implementing computer based cognition since college. In 2001 she moved from using AI techniques as a programmer to trying to advance the field of “Strong AI” as a researcher. She is the founder of Syntience Inc., which was established to manage funding for her exploration of this field. Syntience is currently exploring a novel algorithm for language independent document comparison and classification. She organizes the Bay Area AI Meetup group.

At the 2007 Foresight Vision Weekend Unconference, Monica Anderson presented on the prospect of developing artificial intuition in computer hardware. Further talks are currently planned for delving into the technical details of the project and also exploring the Philosophy and Epistemology to support the theory. For more information on her see: http://artificial-int…
and http://videos.syntien… or http://artificial-int…

Dean Allemang
– Dr. Allemang has a formal background, with an MSc in Mathematics from the University of Cambridge, England, and a PhD in Computer Science from The Ohio State University, USA. He was a Marshall Scholar at Trinity College, Cambridge. Dr. Allemang has taught classes in Semantic Web technologies since 2004, and has trained many users of RDF, and the Web Ontology Language OWL. He is a lecturer in the Computer Science Department of Boston University.

Dr. Allemang was also the Vice-President of Customer Applications at Synquiry Technologies, where he helped Synquiry’s customers understand how the use of semantic technologies could provide measurable benefit in their business processes. He has filed two patents on the application of graph matching algorithms to the problems of semantic information interchange. In the Technology Transfer group at Swisscom (formerly Swiss Telecom) he co-invented patented technology for high-level analysis of network switching failures. He is a co-author of the Organization Domain Modeling (ODM) method, which addresses cultural and social obstacles to semantic modeling, as well as technological ones. He currently works for Top Quadrant, recently published Semantic Web for the Working Ontologist, and has the blog S is for Semantics.”

Syntience Launches New Website

For the new decade, we have launched a newly redesigned website. Comments on the design are welcome.

With the new website, expect some other interesting things to come out into the open as well.

Stay tuned!

Syntience Inc.


Google & Natural Language Processing

January 21, 2010 2 comments

So, I was going to write about unemployment and how the job market has changed, but I got scooped by an amazing article by Drake Bennett called The end of the office…and the future of work.  It is a great look into the phenomenon of Structural Unemployment.  The analysis is very timely, but it could go much deeper.  Drake, if you plan on writing a book, here’s your calling.  There are lots of good stories written on this subject by giants such as Jeremy Rifkin, John Seely Brown, Kevin Kelly, and Marshall Brain.

While reeling from the scoop, depressed, and doing some preliminary market research, I happened upon a gem of a blog post by none other than our favorite search company, Google.  Before proceeding with my post, I recommend that you read the blog post by Steve Baker, Software Engineer @ Google.  I think he does an excellent job describing the problems Google is currently having and why they need such a powerful search quality team.

Here’s what I got from the blog post:  Google, though they really want to have them, cannot have fully automated quality algorithms.  They need human intervention…And A LOT OF IT.  The question is, why?  Why does a company with all of the resources, power, and money that Google has still need to hire humans to watch over search quality?  Why have they not, in all of their intelligent genius, created a program that can do this?

Because Google might be using methods which sterilize away meaning out of the gate.

Strangely enough, it may be that the mindset of Google’s core engineers is holding them back…

We can write a computer program to beat the very best human chess players, but we can’t write a program to identify objects in a photo or understand a sentence with anywhere near the precision of even a child.

This is an engineer speaking, for sure.  But I ask you:  What child do we really program?  Are children precise?  My son falls over every time he turns around too quickly…

The goal of a search engine is to return the best results for your search, and understanding language is crucial to returning the best results. A key part of this is our system for understanding synonyms.

We use many techniques to extract synonyms, that we’ve blogged about before. Our systems analyze petabytes of web documents and historical search data to build an intricate understanding of what words can mean in different contexts.

Google does this using massive dictionary-like databases.  They can only achieve this because of the sheer size and processing power of their server farms.  Not to take away from Google’s great achievements, but Syntience’s experimental systems have been running “synthetic synonyms” since our earliest versions.  We have no dictionaries and no distributed supercomputers.

As a nomenclatural [sic] note, even obvious term variants like “pictures” (plural) and “picture” (singular) would be treated as different search terms by a dumb computer, so we also include these types of relationships within our umbrella of synonyms.

Here’s the way this works, super-simplified:  There are separate “storage containers” for “picture”, “pictures”, “pic”, “pix”, “twitpix”, etc., all in their own neat little boxes.  This separation removes the very thing Google is seeking: meaning in their data.  That’s why their approach doesn’t seem to make much sense to me for this particular application.

The engineer’s job would be to write code that, in a sense, tells the computer to create a new little box and put the new word in a list of associated words.  Shouldn’t the computer have some sort of continuous, flowing process that allows it to break out of the little boxes and allow for some sort of free association?  Well, the answer is “Not using Google’s methods.”
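To make the “little boxes” idea concrete, here is a toy Python sketch (my own illustration, not Google’s actual code) of what a hand-wired synonym table looks like: every surface form gets its own container, and any relationship between forms exists only because an engineer explicitly wrote a rule linking them.

```python
# Toy model of a hand-built synonym table: each term is its own "box",
# and relationships must be declared explicitly -- nothing is inferred.

synonyms = {}

def add_term(term):
    """Create a new, empty box for a term if one does not already exist."""
    synonyms.setdefault(term, set())

def link(a, b):
    """Explicitly record that two terms are related, in both directions."""
    add_term(a)
    add_term(b)
    synonyms[a].add(b)
    synonyms[b].add(a)

add_term("picture")
add_term("pictures")

# Without an explicit rule, the system sees no connection at all:
print(synonyms["picture"])           # empty -- "pictures" is a stranger

# Only after an engineer wires up the relationship do the boxes connect:
link("picture", "pictures")
link("picture", "pic")
print(sorted(synonyms["picture"]))   # now lists 'pic' and 'pictures'
```

The point of the sketch: “picture” and “pictures” sit in disconnected boxes until a human (or human-written rule) joins them, which is exactly the separation I’m arguing strips the meaning out of the data.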

You see, Google models the data to make it easily controllable…for that and for many, MANY other reasons.  But by doing so, they have put themselves in an intellectually mired position.  Monica Anderson does a great analysis of this in a talk on the Syntience site called “Models vs. Patterns”.

So, simply and if you please, rhetorically:

How can computer scientists ever expect a computer to do anything novel with data when there is someone (or some rule/code) telling them precisely what to do all the time?

Kind of constraining…I guess that’s why they always start coding at the “command line”.

Syntience Back Story…at least some of it.

January 18, 2010 1 comment

I do have an original post in the mix that talks a bit about some of the unseen forces at work in the unemployment numbers being posted, but for now here are the words of Monica Anderson talking about inventing a new kind of programming.  From Artificial Intuition:

In 1998, I had been working on industrial AI — mostly expert systems and Natural Language processing — for over a decade. And like many others, for over a decade I had been waiting for Doug Lenat’s much hyped CYC project to be released. As it happened, I was given access to CYC for several months, and was disappointed when it did not live up to my expectations. I lost faith in Symbolic Strong AI, and almost left the AI field entirely. But in 2001 I started thinking about AI from the Subsymbolic perspective. My thinking quickly solidified into a novel and plausible theory for computer based cognition based on Artificial Intuition, and I quickly decided to pursue this for the rest of my life.

In most programming situations, success means that the program performs according to a given specification. In experimental programming, you want to see what happens when you run the program.

I had, for years, been aware of a few key minority ideas that had been largely ignored by the AI mainstream and started looking for synergies among them. In order not to get sidetracked by the majority views I temporarily stopped reading books and reports about AI. I settled into a cycle of days to weeks of thought and speculation alternating with multi-day sessions of experimental programming.

I tested about 8 major variants and hundreds of minor optimizations of the algorithm and invented several ways to measure whether I was making progress. Typically, a major change would look like a step back until the system was fine-tuned, at which point the scores might reach higher than before. The repeated breaking of the score records provided a good motivation to continue.

My AI work was excluded as prior invention when I joined Google.

In late 2004 I accepted a position at Google, where I worked for two years in order to fill my coffers to enable further research. I learned a lot about how AI, if it were available, could improve Web search. Work on my own algorithms was suspended for the duration but I started reading books again and wrote a few whitepapers for internal distribution at Google. I discovered that several others had had similar ideas, individually, but nobody else seemed to have had all these ideas at once; nobody seemed to have noticed how well they fit together.

I am currently funding this project myself and have been doing that since 2001. At most, Syntience employed three paid researchers including myself plus several volunteers, but we had to cut down on salaries as our resources dwindled. Increased funding would allow me to again hire these and other researchers and would accelerate progress.