To handle large-scale image datasets containing massive numbers of unstructured photos that differ in scale, viewpoint, and other imaging conditions, a robust dense 3D reconstruction method is proposed. First, large scale variations are taken into account during the neighboring-image selection stage, which is shown to be more robust on large-scale image datasets. Then, a robust depth-map computation method is proposed that combines the DAISY descriptor with the PatchMatch information-propagation scheme. Regularization is imposed by enforcing depth consistency across different depth maps during the merging stage. Finally, redundant 3D points are removed according to the estimated accuracy of each point. The method parallelizes naturally at the image level, making it well suited to large-scale 3D reconstruction. Experimental results confirm that the proposed method achieves both high accuracy and high completeness.
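To illustrate the PatchMatch information-propagation scheme mentioned above, the following is a minimal sketch, not the paper's implementation: it uses a toy per-pixel cost function in place of the DAISY-based photometric cost, and the function name `patchmatch_depth`, the grid shape, and the depth range are all assumptions introduced for illustration. The core idea is the same, though: random depth initialization, adoption of a neighbor's depth when it lowers the cost (propagation), and a random search with a shrinking window.

```python
import numpy as np

def patchmatch_depth(shape, cost, dmin, dmax, iters=4, seed=0):
    """Toy PatchMatch depth estimation (illustrative sketch only).

    cost(y, x, d) scores a hypothesized depth d at pixel (y, x);
    lower is better. In the actual method this would be a DAISY-based
    matching cost against neighboring images.
    """
    rng = np.random.default_rng(seed)
    h, w = shape
    # Random initialization of the depth map.
    depth = rng.uniform(dmin, dmax, size=(h, w))
    score = np.array([[cost(y, x, depth[y, x]) for x in range(w)]
                      for y in range(h)])

    def try_depth(y, x, d):
        # Accept a candidate depth only if it lowers the cost.
        c = cost(y, x, d)
        if c < score[y, x]:
            depth[y, x], score[y, x] = d, c

    for it in range(iters):
        # Alternate scan direction so good hypotheses travel both ways.
        step = 1 if it % 2 == 0 else -1
        ys = range(h) if step == 1 else range(h - 1, -1, -1)
        xs = range(w) if step == 1 else range(w - 1, -1, -1)
        for y in ys:
            for x in xs:
                # Propagation: adopt an already-visited neighbor's depth.
                if 0 <= y - step < h:
                    try_depth(y, x, depth[y - step, x])
                if 0 <= x - step < w:
                    try_depth(y, x, depth[y, x - step])
                # Random search with an exponentially shrinking window.
                r = (dmax - dmin) / 2
                while r > 1e-3:
                    cand = np.clip(depth[y, x] + rng.uniform(-r, r),
                                   dmin, dmax)
                    try_depth(y, x, cand)
                    r /= 2
    return depth
```

Because each pixel only reads its already-updated neighbors within the current scan, a whole depth map can be computed independently per image, which is what makes the scheme parallelizable at the image level.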