INTRODUCTION: With the rapid proliferation of artificial intelligence (AI) tools, important questions have been raised about their role in manuscript preparation. This study explores the methodological challenges of detecting AI-generated content in neurosurgical publications, using existing detection tools to highlight both the presence of AI-generated text and the fundamental limitations of current detection approaches.

METHODS: We analyzed 100 randomly selected manuscripts published between 2023 and 2024 in high-impact neurosurgery journals, using a two-tiered approach to identify potentially AI-generated text. Text was classified as AI-generated only if a robustly optimized bidirectional encoder representations from transformers pretraining approach (RoBERTa)-based AI classification tool yielded a positive classification and the text's perplexity score was below 100 (see the sketch below). Chi-square tests assessed differences in the prevalence of AI-generated text across manuscript sections, topics, and types. To mitigate bias introduced by the more structured style of abstracts, a subgroup analysis excluding abstracts was also conducted.

RESULTS: One in five (20%) manuscripts contained at least one section flagged as AI-generated. Abstracts and methods sections were disproportionately flagged as AI-generated. After excluding abstracts, the association between section type and AI-generated content was no longer statistically significant.

CONCLUSION: Our findings highlight both the growing integration of AI into manuscript preparation and a critical challenge for academic publishing: as AI language models become increasingly sophisticated, traditional detection methods become less reliable. This suggests the need to shift focus from detection to transparency, emphasizing the development of clear disclosure policies and ethical guidelines for AI use in academic writing.
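The abstract does not name the specific RoBERTa detector or the language model used to compute perplexity, so the following is a minimal sketch of the two-tiered criterion under stated assumptions: the classifier is the public `openai-community/roberta-base-openai-detector` checkpoint, perplexity is computed under GPT-2, and the threshold of 100 is taken from the study.

```python
# Two-tiered AI-text flagging: a minimal sketch, not the authors' exact pipeline.
# Assumptions (not specified in the abstract): the RoBERTa detector is
# "openai-community/roberta-base-openai-detector" and perplexity is computed
# with GPT-2. Only the < 100 perplexity threshold comes from the study itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)
ppl_tokenizer = AutoTokenizer.from_pretrained("gpt2")
ppl_model = AutoModelForCausalLM.from_pretrained("gpt2")
ppl_model.eval()


def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2: exp of the mean per-token NLL."""
    ids = ppl_tokenizer(text, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        loss = ppl_model(ids, labels=ids).loss  # mean cross-entropy per token
    return float(torch.exp(loss))


def flagged_as_ai(text: str, ppl_threshold: float = 100.0) -> bool:
    """Flag text only when BOTH tiers agree: the classifier returns a
    positive ("Fake") label AND perplexity falls below the threshold."""
    pred = detector(text, truncation=True)[0]
    classifier_positive = pred["label"] == "Fake"  # label name is model-specific
    return classifier_positive and perplexity(text) < ppl_threshold
```

Requiring both tiers to agree trades sensitivity for specificity: a low perplexity score alone also fires on formulaic human writing (e.g., structured abstracts), which is the bias the study's subgroup analysis addresses.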
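The chi-square analysis described in METHODS can be reproduced with a standard test of independence on a section-by-flag contingency table. The counts below are illustrative placeholders, not the study's data; only the overall 20% flag rate and the abstract-exclusion subgroup design come from the abstract.

```python
# Chi-square test of independence: a minimal sketch, assuming flags are
# tabulated into a contingency table of section type vs. AI-flag status.
# All counts are hypothetical placeholders, not the study's results.
from scipy.stats import chi2_contingency

# Rows: manuscript sections; columns: [flagged, not flagged]
sections = ["abstract", "introduction", "methods", "results", "discussion"]
table = [
    [14, 86],  # abstract (hypothetical)
    [5, 95],   # introduction (hypothetical)
    [11, 89],  # methods (hypothetical)
    [4, 96],   # results (hypothetical)
    [6, 94],   # discussion (hypothetical)
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"All sections: chi2={chi2:.2f}, dof={dof}, p={p:.4f}")

# Subgroup analysis: repeat after dropping the abstract row, mirroring the
# study's attempt to remove bias from abstracts' more structured style.
chi2_sub, p_sub, dof_sub, _ = chi2_contingency(table[1:])
print(f"Excluding abstracts: chi2={chi2_sub:.2f}, dof={dof_sub}, p={p_sub:.4f}")
```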