In his 1993 essay "The Coming Technological Singularity: How to Survive in the Post-Human Era," Vernor Vinge predicted that once humans invented intelligent machines (or other super-intelligences) superior to humans, we'd enter a post-human era in which the key players were not man but machine. Part of his argument is that even if we wanted to prevent super-intelligences from taking over, the economic incentives to create one would be so great that sooner or later someone would build a machine capable of taking over everything.
I think the main flaw in this idea is that it's not necessarily the case that anything smart enough to take over man's fate would desire to do so. Presumably the super-intelligences would only desire to do what we programmed them to desire. And it's not clear that the desire to propagate and spread is something you'd accidentally end up instilling in your creation.
Though the stray death-cult might create a machine willing to multiply itself endlessly across the universe, you'd assume that most organizations would confine themselves to creating machines that focused on, you know, figuring stuff out for us. The only real danger is that a poorly configured machine would accidentally desire to propagate itself. And presumably it would be contained by all the other super-intelligent computers, which would probably be instilled with a desire to keep computers from taking over the universe.