When to Use NAS vs Manual Architecture Design
Neural architecture search (NAS) is powerful but expensive; manual design is cheaper but bounded by expert intuition. The right choice depends on your constraints and the maturity of your problem domain.
When NAS Makes Sense
You have a large model fleet with strict efficiency requirements. A 1% accuracy improvement or 10% latency reduction multiplied across millions of daily predictions justifies thousands of GPU hours for search.
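The break-even arithmetic here is easy to sketch. The numbers below are illustrative assumptions only (search budget, GPU pricing, serving cost), not figures from the text:

```python
# Back-of-envelope ROI check for a NAS run.
# All constants are illustrative assumptions, not measurements.
SEARCH_COST_GPU_HOURS = 3000   # assumed NAS search budget
GPU_HOUR_PRICE = 2.50          # assumed $/GPU-hour
DAILY_PREDICTIONS = 5_000_000
SERVING_COST_PER_1K = 0.002    # assumed $ per 1,000 predictions
LATENCY_SAVING = 0.10          # 10% latency (~serving cost) reduction

search_cost = SEARCH_COST_GPU_HOURS * GPU_HOUR_PRICE
daily_saving = DAILY_PREDICTIONS / 1000 * SERVING_COST_PER_1K * LATENCY_SAVING
break_even_days = search_cost / daily_saving

print(f"Search cost: ${search_cost:,.0f}")        # $7,500
print(f"Daily saving: ${daily_saving:,.2f}")      # $1.00
print(f"Break-even: {break_even_days:,.0f} days") # 7,500 days
```

At these assumed numbers the search never pays for itself, which is exactly why fleet size matters: multiply the daily traffic by 100 and the break-even drops to about 75 days.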
You are exploring a new domain where existing architectures do not transfer well. When expert intuition fails, systematic search can find non-obvious designs that humans would not consider.
You need device-specific architectures. Deploying to 10 different hardware targets manually means designing 10 architectures. NAS can search for all 10 in parallel with shared infrastructure.
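One way to picture the shared-infrastructure point is a single parameterized search fanned out over targets. `run_search` below is a hypothetical stand-in for a NAS framework's entry point, and the device names and latency budgets are made-up examples:

```python
# Sketch: launch one hardware-aware search per deployment target.
from concurrent.futures import ThreadPoolExecutor

TARGETS = {  # hypothetical device -> latency budget (ms)
    "pixel8-cpu": 15.0,
    "a100-gpu": 2.0,
    "jetson-orin": 8.0,
}

def run_search(target: str, latency_ms: float) -> str:
    # Placeholder: a real implementation would dispatch a
    # latency-constrained search job for this device.
    return f"arch-for-{target}-under-{latency_ms}ms"

with ThreadPoolExecutor() as pool:
    results = dict(zip(TARGETS, pool.map(run_search, TARGETS, TARGETS.values())))
```

The search logic is written once; only the latency constraint (and, in practice, the latency predictor) changes per target.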
When Manual Design Wins
Your task has well-established architectures. For standard image classification, pre-trained models from established architectures often beat NAS-found architectures of similar size because they have been tuned extensively.
You lack compute budget. A minimal NAS run costs hundreds of GPU hours. If your total model development budget is under 1000 GPU hours, spend it on hyperparameter tuning and data quality instead.
Rule of thumb: if the model will serve fewer than 10 million predictions per day, the ROI from NAS rarely justifies the search cost. Start with established architectures.
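The rule of thumb, together with the budget point above, can be encoded as a hypothetical decision helper; `should_consider_nas` and its thresholds are just the heuristics from this section, not a validated policy:

```python
def should_consider_nas(daily_predictions: int, gpu_hour_budget: int) -> bool:
    """Heuristic only: consider NAS when traffic is high AND a real
    search budget exists. Thresholds mirror this section's rules of thumb."""
    HIGH_TRAFFIC = 10_000_000  # predictions/day threshold from the text
    MIN_BUDGET = 1_000         # below ~1000 GPU-hours, spend on tuning instead
    return daily_predictions >= HIGH_TRAFFIC and gpu_hour_budget >= MIN_BUDGET

should_consider_nas(50_000_000, 5_000)  # high traffic, real budget -> True
should_consider_nas(2_000_000, 5_000)   # low traffic -> False
```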