Archive

Posts Tagged ‘Unlearning’

We are the Music Makers and We are the Dreamers of Dreams

February 20, 2010

We are the music makers,
And we are the dreamers of dreams,
Wandering by lone sea-breakers,
And sitting by desolate streams;—
World-losers and world-forsakers,
On whom the pale moon gleams:
Yet we are the movers and shakers
Of the world for ever, it seems.

Arthur O’Shaughnessy

Google & Natural Language Processing

January 21, 2010

So, I was going to write about unemployment and how the job market has changed, but I got scooped by an amazing article by Drake Bennett called The end of the office…and the future of work.  It is a great look into the phenomenon of Structural Unemployment.  The analysis is very timely, but it can go much deeper.  Drake, if you plan on writing a book, here’s your calling.  There are lots of good stories written on this subject out there by giants such as Jeremy Rifkin, John Seely Brown, Kevin Kelly, and Marshall Brain.

While reeling from the scoop, depressed and doing some preliminary market research, I happened upon a gem of a blog post by none other than our favorite search company, Google.  Before proceeding with my post, I recommend that you read the blog post by Steve Baker, Software Engineer @ Google.  I think he does an excellent job describing the problems Google is currently having and why they need such a powerful search quality team.

Here’s what I got from the blog post:  Google, though they really want to have them, cannot have fully automated quality algorithms.  They need human intervention…And A LOT OF IT.  The question is, why?  Why does a company with all of the resources and power and money that Google has still need to hire humans to watch over search quality?  Why have they, in all of their intelligent genius, not created a program that can do this?

Because Google might be using methods which sterilize away meaning out of the gate.

Strangely enough, it may be that Google’s core engineering mindset is holding them back…

We can write a computer program to beat the very best human chess players, but we can’t write a program to identify objects in a photo or understand a sentence with anywhere near the precision of even a child.

This is an engineer speaking, for sure.  But I ask you:  What child do we really program?  Are children precise?  My son falls over every time he turns around too quickly…

The goal of a search engine is to return the best results for your search, and understanding language is crucial to returning the best results. A key part of this is our system for understanding synonyms.

We use many techniques to extract synonyms, that we’ve blogged about before. Our systems analyze petabytes of web documents and historical search data to build an intricate understanding of what words can mean in different contexts.

Google does this using massive dictionary-like databases.  They can only achieve this because of the sheer size and processing power of their server farms.  Not to take away from Google’s great achievements, but Syntience’s experimental systems have been running “synthetic synonyms” since our earliest versions.  We have no dictionaries and no distributed supercomputers.
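To make the contrast concrete, here is a toy sketch of the dictionary-building idea.  To be clear, this is not Google’s actual pipeline: the corpus, the two-word context window, and the 0.75 cutoff are all invented for illustration.  It counts the contexts each word appears in, treats words with similar contexts as “synonyms”, and freezes the result into a static lookup table:

    # Toy sketch -- NOT Google's pipeline, just the general flavor of
    # "learn what words mean from usage, then freeze it into a dictionary".
    from collections import Counter, defaultdict
    from math import sqrt

    corpus = [
        "send me a picture of the beach",
        "send me a pic of the beach",
        "she posted pictures of the party",
        "she posted pix of the party",
    ]

    # Count the words appearing within two positions of each word.
    contexts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for i, w in enumerate(words):
            for j in range(max(0, i - 2), min(len(words), i + 3)):
                if j != i:
                    contexts[w][words[j]] += 1

    def similarity(a, b):
        """Cosine similarity of two context-count vectors."""
        dot = sum(contexts[a][w] * contexts[b][w] for w in contexts[a])
        norm = lambda v: sqrt(sum(c * c for c in v.values()))
        return dot / ((norm(contexts[a]) * norm(contexts[b])) or 1.0)

    # Freeze words with similar contexts into a static lookup table.
    synonyms = {
        w: [x for x in contexts if x != w and similarity(w, x) > 0.75]
        for w in contexts
    }
    print(synonyms["picture"])  # -> ['pic'] on this toy corpus

Once the table is built, lookup is just retrieval; whatever “understanding” there was happened once, offline, and got frozen into place.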

As a nomenclatural [sic] note, even obvious term variants like “pictures” (plural) and “picture” (singular) would be treated as different search terms by a dumb computer, so we also include these types of relationships within our umbrella of synonyms.

Here’s the way this works, super-simplified:  There are separate “storage containers” for “picture”, “pictures”, “pic”, “pix”, “twitpix”, etc., all in their own neat little boxes.  This separation removes the very thing Google is seeking…Meaning in their data.  That’s why their approach doesn’t seem to make much sense to me for this particular application.

The engineer’s activity is to write code that, in a sense, tells the computer to create a new little box and put the new word in a list of associated words.  Shouldn’t the computer have some sort of continuous, flowing process that allows it to break out of the little boxes and permits some sort of free association?  Well, the answer is “Not using Google’s methods.”
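Stripped down to a minimal, hypothetical sketch (the names synonym_boxes and add_synonym are mine, not Google’s), that activity looks like this:

    # Hypothetical "little boxes": each term variant gets its own entry,
    # and nothing is associated until code explicitly says so.
    synonym_boxes = {
        "picture":  ["pictures", "pic"],
        "pictures": ["picture", "pix"],
    }

    def add_synonym(boxes, term, new_word):
        """The engineer's move: create a new box if needed, append by rule."""
        boxes.setdefault(term, []).append(new_word)
        boxes.setdefault(new_word, []).append(term)

    # "twitpix" joins only because a human (or a hand-written rule) said so.
    add_synonym(synonym_boxes, "pix", "twitpix")
    print(synonym_boxes["twitpix"])  # -> ['pix']

Every association is an explicit write; the structure never reorganizes itself, no matter how much data flows past it.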

You see, Google models the data to make it easily controllable…actually for that and for many, MANY other reasons.  But by doing so, they have put themselves in an intellectually mired position.  Monica Anderson does a great analysis of this in a talk on the Syntience Site called “Models vs. Patterns”.

So, simply and if you please, rhetorically:

How can computer scientists ever expect a computer to do anything novel with data when there is someone (or some rule/code) telling it precisely what to do all the time?

Kind of constraining…I guess that’s why they always start coding at the “command line”.

Syntience Back Story…at least some of it.

January 18, 2010

I do have an original post in the mix which talks a bit about some of the unseen things at work in the unemployment numbers being posted, but for now here are the words of Monica Anderson talking about inventing a new kind of programming.  From Artificial Intuition:

In 1998, I had been working on industrial AI — mostly expert systems and Natural Language processing — for over a decade. And like many others, for over a decade I had been waiting for Doug Lenat’s much hyped CYC project to be released. As it happened, I was given access to CYC for several months, and was disappointed when it did not live up to my expectations. I lost faith in Symbolic Strong AI, and almost left the AI field entirely. But in 2001 I started thinking about AI from the Subsymbolic perspective. My thinking quickly solidified into a novel and plausible theory for computer based cognition based on Artificial Intuition, and I quickly decided to pursue this for the rest of my life.

In most programming situations, success means that the program performs according to a given specification. In experimental programming, you want to see what happens when you run the program.

I had, for years, been aware of a few key minority ideas that had been largely ignored by the AI mainstream and started looking for synergies among them. In order not to get sidetracked by the majority views I temporarily stopped reading books and reports about AI. I settled into a cycle of days to weeks of thought and speculation alternating with multi-day sessions of experimental programming.

I tested about 8 major variants and hundreds of minor optimizations of the algorithm and invented several ways to measure whether I was making progress. Typically, a major change would look like a step back until the system was fine-tuned, at which point the scores might reach higher than before. The repeated breaking of the score records provided a good motivation to continue.

My AI work was excluded as prior invention when I joined Google.

In late 2004 I accepted a position at Google, where I worked for two years in order to fill my coffers to enable further research. I learned a lot about how AI, if it were available, could improve Web search. Work on my own algorithms was suspended for the duration but I started reading books again and wrote a few whitepapers for internal distribution at Google. I discovered that several others had had similar ideas, individually, but nobody else seemed to have had all these ideas at once; nobody seemed to have noticed how well they fit together.

I am currently funding this project myself and have been doing that since 2001. At most, Syntience employed three paid researchers including myself plus several volunteers, but we had to cut down on salaries as our resources dwindled. Increased funding would allow me to again hire these and other researchers and would accelerate progress.

The End of Shareholder Value

This is an older article from the Financial Times: “Welch condemns share price focus”.

I thought it was such a monumental article that I bookmarked it and go back to read it now and again.  Well, I just read it again and it had no less of an impact on my outlook.  I still find it shocking how under-reported this announcement was when it hit the internet.  A couple of brave souls in the MSM tried.  I saw them try, but no one wanted to pick this story up.  Talk about putting your fingers in your ears and saying, “lalalalalala!”.

Dr. John Francis “Jack” Welch, Jr., PhD (born November 19, 1935) is the former Chairman and CEO of General Electric between 1981 and 2001. Welch gained a solid reputation for uncanny business acumen and unique leadership strategies at GE. He remains a highly regarded figure in business circles due to his innovative management strategies and leadership style. - via Wikipedia

Could this be denial or something deeper?  I like to give people the benefit of the doubt.  Sometimes I see things that seem to defy reason and I tell myself, “Oh, they didn’t mean to…”.  Then sometimes I think it may in fact be me living in denial for thinking such a thing.

This is the best line in the article:

“Jack Welch, who is regarded as the father of the “shareholder value” movement that has dominated the corporate world for more than 20 years, has said it was “a dumb idea” for executives to focus so heavily on quarterly profits and share price gains.”

Many large-scale consulting firms made TONS of money from the “Shareholder Value” movement.  They made posters of the models they developed, splashed some blue and red on them for creativity’s sake, and hung them on walls as big as a movie screen.  They sold it as an innovative strategic framework on the word of what one dude said in passing.  Now, what did they do when Jack said:

“Shareholder value is a result, not a strategy . . . Your main constituencies are your employees, your customers and your products.”

Nothing…Or at least nothing I could see.  I watched the posters stay up at the consulting firms and the world march ahead as if Jack had never said anything.  As time progressed, my amazement turned to sorrow.  The hypocrisy was staggering…And I was a part of it.  What could they say?  Maybe: “Oops!  Looks like a lot of the last 20 years was built on fantasy and trickery, but it’s our money now so too bad, so sad.”

The positive spin and the learning opportunity I received were just as profound as my realization of the hypocrisy.  What I learned from Jack is that he is reinforcing something I recently discovered on my own in the face of all this craziness:  Be passionate about what you love, be it a product, your service, or your people.  The money will follow as a result of the intrinsic beauty of your art.

Mainly because the intent of your actions will be infused with your creativity and flowing joy.

Catastrophic Forgetting

October 14, 2009

Uncertainty…If one thing is for certain, it is that we are in a time of great uncertainty.

To comfort us in this period, many institutions are taking a stab at what we should do to live in uncertain times.  For example, this morning we were somewhat blessed with the wisdom of the long-standing masters of consulting innovation, The McKinsey Quarterly.  Their attempt at an action plan for coping with our crazy times was put forth in their article, “How managers should approach a fragile economy”.  Of course, this article immediately caught my attention.

While I was somewhat satisfied with their assessment of the situation (3/4 of the article), I was majorly disappointed by their solutions (1/4 of the article).  Generally, I would assume that an article titled “How to…” would give a substantive action plan rather than vague executive solutions.  Hmmm…I guess it is ironic that McKinsey did not include market and media misdirection as part of their analysis.  *sigh* So much for a well-informed citizenry.

So, what exactly chapped my cabbage about McKinsey’s “solutions”?  Let’s take a look at some of the direct quotations:

“Thoughtful economic modeling can start to capture all this complexity.”

McKinsey is going to explain it to us by modeling the complexity of the market.  Well, it is becoming clearer and clearer to many on the edge of research that “all this complexity” cannot be modeled using traditional methods.  I’m not going to go into this in depth due to the esoteric nature of the subject.  It requires letting go of some pretty deeply held assumptions about how the world really works.  If you’re interested in digging deeply into this subject, I invite you to contact me via email.  For this article, I’ll just say that you’d think we would have learned something from the way we were modeling risk in the market and where that led.

“…they must drop the pretense that they can predict the future.”

Ummm…What are the models you used above supposed to do?  Physician, heal thyself!  Models are meant to create a tool which allows for predictable outcomes.  In a sense, McKinsey is saying:  “Hey, stop relying on your projections because they are useless due to the assumptions inherent in the data, but we don’t have to because we smaarrrt.  You duuuumb.  Buy our product!”  Here’s the bottom line, folks:  McKinsey knows about as much as you do about what is happening.

“…they must continue adapting their management processes and capabilities with an eye to making better decisions under uncertainty.”

We did this back in the ’80s and ’90s…It was called BPR (business process re-engineering).  By adding the word “uncertainty” to the end, are we going to magically enlighten ourselves?  Institutional innovation goes much deeper than this shallow interpretation of the matter.  When I talk about real solutions in later posts, I’ll talk about some of the mind power behind the shifting that is occurring.

“…building greater flexibility into strategic activity by putting a greater focus on acquiring options, contingency planning, and the use of stage-gating techniques for committing resources.”

How do you plan for contingencies without predicting the future?  Stage-gating is an old-school, control-oriented mentality.  Not allowing innovation to happen at the lowest levels of the organization can handcuff organizations to their old business models.  The rigidity of old needs to give way to real flexibility.  What is real flexibility?  Stay tuned.

So, I hate to say it, but McKinsey offers no answers here.  They are just as uncertain as the rest of us.  It is quite apparent in the confusion underlying this article.  They are, in effect, telling executives to do a headstand instead of a handstand, as if that will get us through this.

Nope.  Time to walk on our own two feet.