Better handling for manually created source files and compiler APIs
Search Terms
type checker source file cannot read property 'members' of undefined
Suggestion
When a user of the compiler API manually creates a source file with `ts.createSourceFile` instead of retrieving it from their program, and then asks the program for type inference on a contained node, this can crash. See https://github.com/Microsoft/TypeScript/issues/8136, https://github.com/vuejs/vue-cli/issues/2712, https://github.com/angular/tsickle/issues/151, and https://github.com/general-language-syntax/TS-GLS/issues/39.
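The failure mode above can be sketched as follows. This is an illustrative, self-contained reproduction of the *pattern* (file names and contents are made up), not code from any of the linked issues:

```typescript
import ts from "typescript";

// A program built without the hand-made file (an empty root set here,
// purely for illustration; a real program would have its own roots).
const program = ts.createProgram([], { noLib: true });
const checker = program.getTypeChecker();

// A SourceFile created directly, never registered with any program:
const detached = ts.createSourceFile(
  "detached.ts",
  "interface Foo { bar: string } const f: Foo = { bar: 'x' };",
  ts.ScriptTarget.Latest,
  /*setParentNodes*/ true
);

// The program does not know this file, so type queries on its nodes are
// the unsupported case this issue describes: depending on the node and
// TypeScript version they may crash (e.g. "cannot read property
// 'members' of undefined") or silently return an error type, so the
// call is left commented out here:
// const decl = (detached.statements[1] as ts.VariableStatement)
//   .declarationList.declarations[0];
// checker.getTypeAtLocation(decl.name);

const isKnown = program
  .getSourceFiles()
  .some(sf => sf.fileName === detached.fileName);
console.log(isKnown); // false
```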
It seems like one of three interpretations might be best:
- This is explicitly unsupported behavior, but for the sake of performance & simplicity, no checks should happen
- This is explicitly unsupported behavior, and a more explicit error should be thrown
- This should become supported behavior, and the program should dynamically create source files as requested
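For contrast with the third interpretation, the currently supported route is to expose the synthetic text to the program via a `CompilerHost`, so the checker sees the file at program-creation time. A minimal sketch (the host, file names, and snippet below are illustrative, not a prescribed API usage):

```typescript
import ts from "typescript";

// In-memory "file system" for the synthetic source text.
const files: Record<string, string> = {
  "snippet.ts": "const answer = 42;",
};

// A minimal CompilerHost serving only the in-memory files.
const host: ts.CompilerHost = {
  fileExists: f => f in files,
  readFile: f => files[f],
  getSourceFile: (f, lang) =>
    f in files ? ts.createSourceFile(f, files[f], lang, true) : undefined,
  getDefaultLibFileName: () => "lib.d.ts",
  writeFile: () => {},
  getCurrentDirectory: () => "",
  getCanonicalFileName: f => f,
  getNewLine: () => "\n",
  useCaseSensitiveFileNames: () => true,
};

const program = ts.createProgram(Object.keys(files), { noLib: true }, host);
const checker = program.getTypeChecker();

// The checker saw this SourceFile during program creation, so type
// queries on its nodes are safe.
const sf = program.getSourceFile("snippet.ts")!;
const stmt = sf.statements[0] as ts.VariableStatement;
const name = stmt.declarationList.declarations[0].name;
const typeText = checker.typeToString(checker.getTypeAtLocation(name));
console.log(typeText); // "42"
```

The cost of this workaround is that every edit to the synthetic text requires rebuilding the program, which is part of what motivates the third option above.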
Use Cases
Auto-generated TypeScript files, such as `.vue` snippets, still want access to a type checker, e.g. for TSLint rules.
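A tool in this position first has to lift the TypeScript text out of the `.vue` file before it can build a source file for it. A minimal, illustrative sketch; real tools such as fork-ts-checker-webpack-plugin's VueProgram use a proper SFC parser rather than a regex:

```typescript
// Illustrative only: a hand-written .vue single-file component and a
// naive regex extraction of its <script> block.
const vueSource = `
<template><p>{{ msg }}</p></template>
<script lang="ts">
export const msg: string = "hello";
</script>
`;

const match = /<script[^>]*>([\s\S]*?)<\/script>/.exec(vueSource);
const scriptText = match ? match[1].trim() : "";
console.log(scriptText); // export const msg: string = "hello";
```

The extracted `scriptText` is exactly the kind of manually obtained source that ends up being passed to `ts.createSourceFile`, which is where this issue's crash surfaces.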
Examples
https://github.com/palantir/tslint/issues/4273
Checklist
My suggestion meets these guidelines:
- This wouldn’t be a breaking change in existing TypeScript / JavaScript code
- This wouldn’t change the runtime behavior of existing JavaScript code
- This could be implemented without emitting different JS based on the types of the expressions
- This isn’t a runtime feature (e.g. new expression-level syntax)
Issue Analytics
- State:
- Created 5 years ago
- Comments:6 (5 by maintainers)
Top GitHub Comments
While I am not too familiar with TypeScript internals, fork-ts-checker-webpack-plugin somehow manages to allow typechecking for `.vue` files: https://github.com/Realytics/fork-ts-checker-webpack-plugin/blob/master/src/VueProgram.ts
I’d say so, yes.
Setting it at the same time we refresh and set `.parent` pointers in the parser/binder should be relatively cheap, since those need to be rewritten in the same way. I'd be tempted to tie it to the `diagnostics` flag, but it's probably best to only tie it to a flag if it has measurable perf impact.
We have some internal tools for informally measuring perf changes on some samples we have (and analyzing the resulting perf traces); unfortunately we've never made them public. The closest thing we use in public is the aggregate timing data collected by the `--extendedDiagnostics` flag, but that's only for `tsc`. For measuring, e.g., the impact on LS operations and incremental parse, we don't have anything anywhere (at least that I know of). We store execution time of tests when they're run in parallel so we can more optimally schedule them on future runs, so that tracks, e.g., `fourslash` test duration. You could maybe shoehorn that into measuring a fourslash test that edits and rechecks a file a bunch of times (or just enable and use the `ts.performance` timers on fourslash tests and add a new verifier/baseliner)... but yeah, we have no formal infra for measuring LS perf generally right now. When a perf issue is reported, we usually just get a repro, open the build or LS in the Chrome devtools, and run a performance trace; that's not great for trying to benchmark an LS change, though.