Convolutional Neural Networks (CNNs) have become the de facto approach to image feature extraction in recent years; however, their design and construction remains a complicated task. As more progress is made on the internal components of CNNs, the task of assembling them effectively from core components becomes even more arduous. To overcome these barriers, we propose the Swarm Optimised Block Architecture (SOBA), combined with an enhanced adaptive Particle Swarm Optimisation (PSO) algorithm, for deep CNN model evolution. The enhanced PSO model employs adaptive acceleration coefficients generated using several cosine annealing mechanisms to overcome stagnation. Specifically, we propose a combined training and structure optimisation process for deep CNN model generation, where the proposed PSO model is used to explore a bespoke search space defined by a simplified block-based structure. The proposed PSO model not only devises deep networks specifically for image classification, but also builds and pre-trains models for transfer learning tasks. To significantly reduce the hardware and computational cost of the search, the devised CNN model is optimised and trained simultaneously, using a weight-sharing mechanism and a final fine-tuning process. Our system compares favourably with related research on optimised deep network generation: it achieves an error rate of 4.78% on the CIFAR-10 image classification task with 34 hours of combined optimisation and training, and an error rate of 25.42% on the CIFAR-100 data set in 36 hours. All experiments were performed on a single NVIDIA GTX 1080Ti consumer GPU.
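The idea of cosine-annealed acceleration coefficients can be sketched as follows. This is a minimal illustration, not the paper's exact schedules: it assumes a single symmetric annealing scheme in which the cognitive coefficient c1 decays from a maximum to a minimum while the social coefficient c2 grows in the opposite direction, shifting the swarm from exploration towards exploitation; the bounds `c_min`, `c_max` and the function names are hypothetical.

```python
import math
import random

def cosine_annealed_coefficients(t, t_max, c_min=0.5, c_max=2.5):
    """Cosine-anneal PSO acceleration coefficients over iterations.

    Illustrative scheme (not the paper's exact mechanism): c1 decays
    from c_max to c_min while c2 grows symmetrically, so early
    iterations favour each particle's own best (exploration) and late
    iterations favour the swarm best (exploitation).
    """
    frac = 0.5 * (1 + math.cos(math.pi * t / t_max))  # 1 -> 0 as t -> t_max
    c1 = c_min + (c_max - c_min) * frac        # decays: c_max -> c_min
    c2 = c_min + (c_max - c_min) * (1 - frac)  # grows:  c_min -> c_max
    return c1, c2

def pso_velocity_update(v, x, pbest, gbest, t, t_max, w=0.7):
    """Standard PSO velocity update using the annealed coefficients."""
    c1, c2 = cosine_annealed_coefficients(t, t_max)
    return [
        w * vi
        + c1 * random.random() * (pi - xi)   # cognitive (personal-best) pull
        + c2 * random.random() * (gi - xi)   # social (global-best) pull
        for vi, xi, pi, gi in zip(v, x, pbest, gbest)
    ]
```

At t = 0 the schedule yields (c1, c2) = (2.5, 0.5); at t = t_max it yields (0.5, 2.5), giving the smooth exploration-to-exploitation transition that the annealing is intended to provide.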