From novels such as Isaac Asimov's I, Robot to modern video games like Horizon Zero Dawn, sci-fi has long imagined what would happen if AI broke free of human control.
Now, according to the report, the "worst-case scenario" of humans losing control of advanced AI systems is "taken seriously by many experts".
Controlled lab tests suggest that AI models are increasingly exhibiting some of the capabilities required to self-replicate across the internet.
AISI examined whether models could carry out simple versions of tasks needed in the early stages of self-replication - such as "passing know-your-customer checks required to access financial services" in order to successfully purchase the computing on which their copies would run.
But the research found that, to do this in the real world, AI systems would need to complete several such actions in sequence "while remaining undetected" - a capacity they currently appear to lack.
Institute experts also looked at the possibility of models "sandbagging" - or strategically hiding their true capabilities from testers.
Tests showed this was possible, but the researchers found no evidence of this type of subterfuge actually taking place.
In May, AI firm Anthropic released a controversial report describing how an AI model exhibited seemingly blackmail-like behaviour when it believed its "self-preservation" was threatened.
The threat from rogue AI is, however, a source of profound disagreement among leading researchers - many of whom feel it is exaggerated.