Cross-dataset testing is critical for examining the performance of machine learning (ML) models. However, most studies modelling transcriptomic and clinical data have conducted only intra-dataset testing. It is also unclear whether normalization and non-differentially expressed genes (NDEG) can improve the cross-dataset performance of ML models. We thus aimed to understand whether normalization, NDEG and data source are associated with ML performance in cross-dataset testing. The transcriptomic and clinical data shared by the lung adenocarcinoma cases in TCGA and ONCOSG were used. The best cross-dataset ML performance was achieved using transcriptomic data alone and was statistically better than that achieved using combined transcriptomic and clinical data. The best balanced accuracy, area under the curve and accuracy were significantly better for ML models trained on TCGA and tested on ONCOSG than for those trained on ONCOSG and tested on TCGA (p<0.05 for all). Normalization and NDEG greatly improved intra-dataset ML performance in both datasets, but not cross-dataset performance. Strikingly, modelling ONCOSG transcriptomic data alone outperformed modelling its combined transcriptomic and clinical data, whereas including clinical data in TCGA did not significantly affect ML performance, suggesting limited value of the clinical data or an overwhelming influence of the transcriptomic data in TCGA. Performance gains in intra-dataset testing were more pronounced for ML models trained on ONCOSG than on TCGA. Among the six ML models compared, the support vector machine was the most frequent best performer in both intra-dataset and cross-dataset testing. Therefore, our data show that data source, normalization and NDEG are associated with both intra-dataset and cross-dataset ML performance in modelling transcriptomic and clinical data.
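
To make the distinction between the two evaluation settings concrete, the sketch below contrasts intra-dataset testing (cross-validation within one cohort) with cross-dataset testing (training on one cohort and testing on the other) using a support vector machine. This is a minimal illustration under assumed conditions, not the study's actual pipeline; the variables X_tcga, y_tcga, X_oncosg and y_oncosg are hypothetical placeholders for expression matrices restricted to genes shared by both cohorts and their class labels.

```python
# Minimal sketch of intra-dataset vs. cross-dataset evaluation (illustrative only).
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.metrics import balanced_accuracy_score, roc_auc_score, accuracy_score

def intra_dataset_score(X, y, cv=5):
    """Cross-validated balanced accuracy within a single cohort."""
    model = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
    return cross_val_score(model, X, y, cv=cv, scoring="balanced_accuracy").mean()

def cross_dataset_scores(X_train, y_train, X_test, y_test):
    """Train on one cohort, test on the other; report the three metrics used in the abstract."""
    model = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    prob = model.predict_proba(X_test)[:, 1]
    return {
        "balanced_accuracy": balanced_accuracy_score(y_test, pred),
        "auc": roc_auc_score(y_test, prob),
        "accuracy": accuracy_score(y_test, pred),
    }

# Hypothetical usage:
# intra_dataset_score(X_tcga, y_tcga)
# cross_dataset_scores(X_tcga, y_tcga, X_oncosg, y_oncosg)  # train on TCGA, test on ONCOSG
```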