Better yet, how about an answer from a philosopher-mathematician...
Key phrase in that article is that the notion of the technological singularity hinges on the notion of machines "surpassing human intellect."
That is about as vague as a notion can get. Just what is considered human intellect, and how is exceeding it supposed to allow machines to suddenly adopt unforeseen methods of improving themselves?
This is not futuristic sci-fi anymore, since computers have already exceeded "human intellect" in many respects, for instance at the venerated benchmark of chess. In the early '90s Garry Kasparov, probably the greatest grandmaster ever, predicted that no computer could ever beat him or Anatoly Karpov. Several years later, in 1997, he was soundly thrashed by IBM's Deep Blue, a supercomputer that is pedestrian by today's standards.
And of course computers have long "exceeded human intellect" at numerical calculation. Imagine where you'd have been yesterday, watching the election without your trusty PC to give you six different sites full of numbers, and without the TV networks constantly getting the updated tallies and flashing them on the screen. No "human intellect" could have brought you all that data or arranged it so nicely for you to view.
Besides this, the article doesn't even begin to consider what else, besides "intellect," would be required for a machine to become unexpectedly self-perpetuating. Programming it to do so doesn't count. That rules out AI completely, which is entirely a matter of programming and is predictable, at least in terms of probabilities if not certainties. If you play video games like I do, you know that no AI has ever been made that can think creatively, and likely none ever will be. And the reason why is... drumroll...
>>>Machines cannot think creatively because they are NOT ALIVE.<<<
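To make the point above concrete: what passes for game "AI" is just programmed search over rules a human wrote down. Here is a minimal, hypothetical sketch (not from any particular game) of a tic-tac-toe minimax player; it never invents a strategy, it mechanically scores every legal continuation, so its behavior is entirely predictable.

```python
def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: +1 = X wins, -1 = O wins, 0 = draw."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        # X maximizes the score, O minimizes it -- pure rule-following.
        if best is None or (player == 'X') == (score > best[0]):
            best = (score, m)
    return best

# Perfect play from an empty board is always a draw: the outcome is
# baked into the rules, not "discovered" creatively by the machine.
score, move = minimax(' ' * 9, 'X')
print(score)  # 0
```

Every run of this search produces the same answer from the same position, which is exactly the sense in which programmed AI is predictable rather than creative.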
Enabling machines to think creatively, as opposed to merely running AI, will require a much greater understanding of just what makes living things alive, how they differ from machines, and how to impart that quality to machines. Two big ways they differ:
1) everything that is alive evolved from primitive beginnings over time; machines are manufactured.
2) everything that is alive seeks to remain alive, at least until it has reproduced, with few exceptions.
Living intelligence is not controlled by itself per se but by the organism: the will, volition, and so on. In most "thinking" beings with central nervous systems, the vast majority of the system is devoted to functions that don't resemble "intellectual" things at all. Such as, maybe, will. Or consciousness. Or "I want a bologna sandwich." Or playing tennis.
Show me the robot that can even BEGIN to play tennis, and I'll show you a robot worth many billions of dollars to robotics and computer experts who would want to know how it ticks. Robots can't even be made to walk in a truly human manner, let alone move with agility. Nobody knows how we do it, but I can assure you we don't think about it first.
If some computer a la the Terminator or Matrix were to gain some kind of a will and try to take things over, it would have a devil of a time making robots that could beat humans in battle. A technological takeover is no more likely than a room full of disembodied brains taking over. Tech can crunch data but that's about it.