Don't use loops, especially at that scale, in an RDBMS.
Try to fill your table with 1 million rows quickly with a single query:
INSERT INTO `entity_versionable` (fk_entity, str1, str2, bool1, double1, date)
SELECT 1, 'a1', 100, 1, 500000, '2013-06-14 12:40:45'
FROM
(
select a.N + b.N * 10 + c.N * 100 + d.N * 1000 + e.N * 10000 + f.N * 100000 + 1 N
from (select 0 as N union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) a
, (select 0 as N union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) b
, (select 0 as N union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) c
, (select 0 as N union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) d
, (select 0 as N union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) e
, (select 0 as N union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) f
) t
The six 10-row derived tables are cross joined, so the outer SELECT produces 10^6 = 1,000,000 identical rows. On my box (MacBook Pro, 16 GB RAM, 2.6 GHz Intel Core i7) it took ~8 seconds to complete:
Query OK, 1000000 rows affected (7.63 sec)
Records: 1000000  Duplicates: 0  Warnings: 0
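The schema of `entity_versionable` is not shown here, so below is a minimal sketch of a table the statements above could run against; every column type is an assumption inferred from the inserted values, not taken from the original table.
CREATE TABLE `entity_versionable` (
    id        INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- hypothetical surrogate key, not in the original
    fk_entity INT UNSIGNED NOT NULL,                 -- assumed reference to an entity table
    str1      VARCHAR(255),
    str2      VARCHAR(255),
    bool1     TINYINT(1),
    double1   DOUBLE,
    `date`    DATETIME,
    PRIMARY KEY (id)
) ENGINE=InnoDB;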
UPDATE 1: Now a stored procedure version that uses a prepared statement:
DELIMITER $$
CREATE PROCEDURE `inputRowsNoRandom`(IN NumRows INT)
BEGIN
    DECLARE i INT DEFAULT 0;

    -- One prepared INSERT, executed NumRows times with the same values
    PREPARE stmt
       FROM 'INSERT INTO `entity_versionable` (fk_entity, str1, str2, bool1, double1, date)
             VALUES(?, ?, ?, ?, ?, ?)';
    SET @v1 = 1, @v2 = 'a1', @v3 = 100, @v4 = 1, @v5 = 500000, @v6 = '2013-06-14 12:40:45';

    WHILE i < NumRows DO
        EXECUTE stmt USING @v1, @v2, @v3, @v4, @v5, @v6;
        SET i = i + 1;
    END WHILE;

    DEALLOCATE PREPARE stmt;
END$$
DELIMITER ;
It completed in ~3 minutes:
mysql> CALL inputRowsNoRandom(1000000);
Query OK, 0 rows affected (2 min 51.57 sec)
Feel the difference: 8 seconds vs 3 minutes.
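Much of that gap typically comes from committing every single INSERT under autocommit. As a rough sketch (not one of the measured runs above), the same procedure can be wrapped in one client-side transaction by disabling autocommit around the call:
SET autocommit = 0;                   -- stop committing after every single INSERT
CALL inputRowsNoRandom(1000000);      -- all rows accumulate in one transaction
COMMIT;                               -- one commit at the end
SET autocommit = 1;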
UPDATE 2: To speed things up, we can use explicit transactions and commit the inserts in batches. Here is an improved version of the SP.
DELIMITER $$
CREATE PROCEDURE inputRowsNoRandom1(IN NumRows BIGINT, IN BatchSize INT)
BEGIN
    DECLARE i INT DEFAULT 0;

    PREPARE stmt
       FROM 'INSERT INTO `entity_versionable` (fk_entity, str1, str2, bool1, double1, date)
             VALUES(?, ?, ?, ?, ?, ?)';
    SET @v1 = 1, @v2 = 'a1', @v3 = 100, @v4 = 1, @v5 = 500000, @v6 = '2013-06-14 12:40:45';

    START TRANSACTION;
    WHILE i < NumRows DO
        EXECUTE stmt USING @v1, @v2, @v3, @v4, @v5, @v6;
        SET i = i + 1;
        -- Commit every BatchSize rows instead of once per row
        IF i % BatchSize = 0 THEN
            COMMIT;
            START TRANSACTION;
        END IF;
    END WHILE;
    COMMIT;

    DEALLOCATE PREPARE stmt;
END$$
DELIMITER ;
Results with different batch sizes:
mysql> CALL inputRowsNoRandom1(1000000,1000);
Query OK, 0 rows affected (27.25 sec)
mysql> CALL inputRowsNoRandom1(1000000,10000);
Query OK, 0 rows affected (26.76 sec)
mysql> CALL inputRowsNoRandom1(1000000,100000);
Query OK, 0 rows affected (26.43 sec)
You can see the difference for yourself. Still more than 3 times slower than the cross join.
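If you rerun these benchmarks, a small sketch for resetting and verifying the table between runs (assuming no other table references `entity_versionable`):
TRUNCATE TABLE `entity_versionable`;        -- reset before the next run
-- ... run one of the approaches above ...
SELECT COUNT(*) FROM `entity_versionable`;  -- should report 1000000 afterwards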