Criticism that no model weights or compiler tools were released, and that the absence of performance benchmarks against baseline approaches limits reproducibility and evaluation.
Critics argue that the project is virtually unusable without released weights or compiler tools, undermining the low-budget experimentation the system claims to enable. Much of the debate centers on the report's presentation: some commenters dismiss it as "repetitive AI fluff" whose salesman-like tone masks a lack of empirical data. Others counter that blaming AI for the missing benchmarks is a distraction, arguing the omission of results is a deliberate choice by the authors rather than a byproduct of their writing tools. Ultimately, while the neurosymbolic approach holds some interest, the community finds the project's merits difficult to evaluate without reproducible benchmarks.