
[Question] Do people successfully use this?

See original GitHub issue

This may sound like a rant but that’s not what’s intended. I’d actually like to know if people successfully use this tool to generate code that runs in production because my experiences are pretty bad so far.

The first issue for me was: okay, there’s swagger codegen that has 2.5k issues and then there’s this fork which also has 1.7k issues. That’s a huge warning sign already. But look, there’s a ton of generators and things it does so that might be okay. But which one do I use now? I tried both and the results were the same.

I haven’t tried the other generators; maybe I’m the only one trying to generate client code for Java with RestTemplate. I’ve hit a bunch of issues:

  • unclear docs
  • syntax errors in the generated build.gradle
  • missing dependencies in the generated build.gradle
  • arrays with anyOf are not supported - I have to manually add a parent type and let the subtypes extend it
  • I have to explicitly set openApiNullable to true in the Gradle plugin to get the dependency added, even though it supposedly defaults to true (according to the docs)

And after all of this I still cannot use the client, because a nullable field makes my Spring app fail with

Cannot construct instance of `org.openapitools.jackson.nullable.JsonNullable`

despite the config being set up.
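To make the anyOf workaround above concrete, here is a hand-written sketch - not generator output - of what "manually add a parent type and let the subtypes extend it" amounts to. The Pet/Dog/Cat names are invented for the example:

```java
import java.util.List;

public class AnyOfWorkaround {
    // Hand-written parent type standing in for the unsupported anyOf union.
    abstract static class Pet {
        final String name;
        Pet(String name) { this.name = name; }
    }

    // The models that would appear in "anyOf: [Dog, Cat]" now share a supertype.
    static final class Dog extends Pet {
        Dog(String name) { super(name); }
    }

    static final class Cat extends Pet {
        Cat(String name) { super(name); }
    }

    // Callers can handle the union uniformly via the parent type.
    static String describe(Pet pet) {
        return pet.getClass().getSimpleName() + ":" + pet.name;
    }

    public static void main(String[] args) {
        // The schema's "array with anyOf items" becomes a plain List<Pet>.
        List<Pet> pets = List.of(new Dog("Rex"), new Cat("Whiskers"));
        for (Pet p : pets) {
            System.out.println(describe(p));
        }
    }
}
```

The downside, of course, is that this parent type lives outside the generated code and has to be maintained by hand whenever the spec changes.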

Again, please don’t get me wrong. I’m sure that lots of effort has gone into both projects. Maybe I’m just doing something wrong? Although OAS 3.0.3 is only from this February, the original 3.0.0 release is quite dated, so the tooling has had time to mature. I can’t imagine that the generator just doesn’t work - I mean, it’s at version 5 already.

For my current project my idea was to go spec-first and have the client generated with a single button click. OAS isn’t new anymore, so the tooling should be good by now. But I’ve now invested several days without success, and I guess I’ll have to fall back to manually annotating my POJOs and stitching things together with RestTemplate or Retrofit myself.

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 24
  • Comments: 20 (8 by maintainers)

Top GitHub Comments

16 reactions
jimschubert commented, Sep 26, 2020

@black-snow those are some great questions, and excellent feedback. I know I’m a maintainer, so my feedback is heavily biased and reflects what one might consider “advanced usage”. I’ll explain my use cases and dive deeper into your concerns.

My production experiences

When I was at Expedia, I used swagger-codegen generated code and later openapi-generator generated code (after we forked the project) to generate a Scala Finatra client in a Spark Structured Streaming application. This was a custom generator because our project doesn’t include a Finatra output, and we had internal observability tools which I had to integrate into the code.

I also used the JavaScript node.js client generator to generate contract tests which we ran in the backend API pipelines for my team on every build.

Just before I left Expedia, I’d written a custom generator to generate some pact tests with a goal to replace the node.js contract tests. There was nothing wrong with the tests, my team just wasn’t comfortable with JavaScript-based tests so we were moving to JSON-based tests (I know… don’t ask me).

When I started using the generator, my team had 7 APIs which totaled around 40 endpoints and maybe 200 models. When I left there were only 3 APIs but 2-3x the endpoints and models.

The Scala Finatra client code was running in production, and while the other code was test code it was integral in getting other code to production. We didn’t use generators to generate our server code because: 1) our codebase had a more complex architecture than the standard controller/service type of output you’ll find in our generators 2) we were not following spec-first development.

My current employer uses OpenAPI documentation for spec-first development across all APIs - of which there are 100s or maybe 1000s. I plan to work with an internal team to integrate OpenAPI Generator into some of our internal tooling to simplify client/server/doc generation for the entire company.

Are others using it?

Absolutely.

Also, as @agilob points out, a ton of open issues is actually a good sign: it shows there are a ton of users. It also means we’re not just randomly closing issues because we don’t hear back from people for a couple of weeks. The issues are the project’s backlog. Just think what would happen if you went to work tomorrow and started deleting hundreds of Jira tickets because your Project Manager hadn’t commented on them in two weeks. I could never understand how projects mark issues as “stale”; something is either an issue or it isn’t. Just like we can’t wish away Coronavirus, we can’t wish away the technical debt in open source.

There’s another reason why we have so many issues, though. Many of them are already fixed as a matter of course and we sometimes don’t close associated issues until a release has been completed. Many of our issues are duplicates where people don’t realize that one or more issues are describing the same thing. Some of the issues don’t follow our issue reporting guidelines (no sample inputs, no expectations, no generator version). More still are a result of the user trying to generate against an invalid or incomplete spec document, which is not something we support. Interacting with all of those issues takes time. In fact, at one point our core team was collectively spending a few hours a week just labeling issues and pull requests. This drove me to write a labeler github app to reduce our maintenance overhead. That worked, but since it’s a hosted GitHub app, it came with its own maintenance overhead… so I wrote a labeler GitHub action instead.

Yet another reason why there are so many issues is time. Everyone who contributes to this project does so in their free time. We have to manage our time between new code contributions, reviewing pull requests, and community engagement. I’d read somewhere that more than half of open source contributors spend 5 hours or less a week on open source. I usually average closer to 10 hours a week. I think a project this size with the amount of contributions made and merged on a weekly basis would be considered “very active” in open source.

Your linked concerns

Your linked concerns are very valid. I have been working for over a year on Project 5: Simplify Contributions and Improve Usability specifically to ease the pain that newcomers feel when using OpenAPI Generator. When I first began contributing to Swagger Codegen 4-5 years ago, I really enjoyed it because I considered it a challenge to dig in and understand how everything worked. But, that’s not a good user experience if all your users have to do the same. This is why the first thing I did after the fork was to create a roadmap which helped us be a little more transparent about where we intended to go with the project.

I’ve also spent maybe 100 hours or so organizing and creating developer-facing documentation at https://openapi-generator.tech/ which explains usages and provides examples for using templates and fully customizing generators.

It’s important for folks to read these docs, and they can always use some additional love. Unfortunately, most developers don’t really like writing docs (myself included). And since this is a community-driven project we really do rely on contributions from the community to help improve things for the community.

I hope you don’t mind if I use you as an example related to documentation… I mean no offense; it just illustrates the realities of open source and our reliance on the community. You’d referenced our docs in #7479, which you begin with:

Couldn’t find a repo for the gradle plugin so I’m posting this here.

The gradle section of the docs ends with:

For full details of all options, see the plugin README.


The plugin README links directly to the readme within this repo from which the gradle plugin is built.

That readme lists all available options. The configOptions you asked about are documented as:

A map of options specific to a generator. To see the full list of generator-specified parameters, please refer to generators docs

I’ll admit that I don’t consider myself a technical writer, but I do try to make things as clear as possible when communicating concepts. Unfortunately, I have “expert bias” as a core contributor, so it’s often difficult to write docs as if I’m a first-time user. We rely on issues and bug reports from users to improve these. When a user raises a concern like yours, it indicates to me that the README link for the plugin probably also needs to be included closer to the first example. As another example, your question about why some configs are not nested within configOptions is a concern I’ve raised in the past, but the community gave no real feedback suggesting it was a perceived problem, so it hasn’t had priority.
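For readers landing here, a minimal sketch of what that configOptions nesting looks like in the Gradle plugin DSL - the spec/output paths and the resttemplate library value are just example choices, and openApiNullable is set explicitly per the question above:

```groovy
openApiGenerate {
    generatorName = "java"
    inputSpec     = "$rootDir/specs/api.yaml".toString()
    outputDir     = "$buildDir/generated".toString()

    // Generator-specific options are nested inside configOptions;
    // top-level properties like generatorName are plugin options.
    configOptions = [
        library        : "resttemplate",
        openApiNullable: "true"  // explicit, as discussed in the question
    ]
}
```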

Your comments about “doing something wrong”

You’re not doing anything wrong. I think you are approaching code generation tooling as if it’ll output production-ready code which fully suits your needs, and that assumption is what’s frustrating you. It rarely holds for code generation tools that are bound to dynamic, user-defined inputs.

In many cases, our clients will just work right out of the box for maybe 95% of the use cases (see our ruby, kotlin, C#, and aspnetcore generators, for example). In some instances, the code may require template customization as others have mentioned above or it may require workarounds or older versions.
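As an example of such a workaround: the JsonNullable failure quoted in the question is typically resolved by registering the Jackson module from jackson-databind-nullable, which the generated models depend on. A minimal sketch, assuming Spring Boot (which auto-registers Module beans) and the org.openapitools:jackson-databind-nullable dependency on the classpath - the class name is made up:

```java
import org.openapitools.jackson.nullable.JsonNullableModule;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JacksonNullableConfig {
    // Registering this module teaches Jackson how to construct JsonNullable
    // instances; without it, deserialization fails with the
    // "Cannot construct instance of JsonNullable" error from the question.
    @Bean
    public JsonNullableModule jsonNullableModule() {
        return new JsonNullableModule();
    }
}
```

Outside of Spring, the equivalent is calling objectMapper.registerModule(new JsonNullableModule()) on the ObjectMapper the client actually uses.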

In your case, specifically, I think you’ve experienced frustration because you’re targeting Gradle builds and a generator which isn’t as active as others. We have an extensive regression testing suite. While we can’t possibly test all the combinations of code, we do run full compilation tests on:

  • 73 generated outputs in circle ci
  • 19 generated outputs in shippable
  • ~10 generated outputs across other ci vendor instances

So, for every build we run maybe 200 minutes of builds, tests, and integration tests which verify outputs of 100+ generated sources.

But since many of our JVM generators output Maven POM files and our project itself uses Maven, CI executes the Maven POM in each sample. This allows some bugs to slip by in Gradle build files. Unfortunately for us, these often go unnoticed because people are probably not relying on our build outputs in the first place (i.e. they generate models/apis/docs into an existing project).

My recommendation: if you’re trying one of the generated outputs and the Gradle build fails, compare it against the Maven POM (or even attempt the build with Maven). If that also fails, it’s a bug. We will often fix the bug and add the generator to the list of compilation checks with the other 100 or so generated outputs.
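A sketch of that cross-check, assuming a generated sample directory containing both a pom.xml and a build.gradle, with Maven and Gradle installed locally:

```shell
cd generated-client

# Try the build the sample was actually exercised with in CI...
mvn -q clean compile

# ...then the Gradle build. If Maven compiles but Gradle fails,
# the generated Gradle build file itself is likely the bug to report.
gradle build
```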

But I want to be clear - and this echoes what others have said - you should expect to do customization.

Customization

The thing that sets our tool apart from others is the customization. Our built-in templates are all mustache. We support handlebars as well. We also support custom template engines.

If templating doesn’t suit your needs, and this can happen if the data bound to templates doesn’t quite match your expectations, you can write a custom generator.

Until a couple of weeks ago, you were limited to generating only those templates known to the generator at compile time. As of version 5.0.0, you’ll be able to define completely new templates externally via configuration files like those we use for samples; this will be available for both built-in and custom generators.
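To make the template story concrete, here is a sketch of overriding built-in templates via the CLI. The jar, spec, and directory names are placeholders; any file placed in the template directory overrides the corresponding built-in mustache template, and anything missing falls back to the built-ins:

```shell
# Copy only the templates you want to change (e.g. the build.gradle
# template for the java generator) into ./my-templates, then point
# the generator at that directory.
java -jar openapi-generator-cli.jar generate \
  -i api.yaml \
  -g java \
  --library resttemplate \
  -t ./my-templates \
  -o ./out
```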

Getting Help

I hope my response was informative and answered most of your questions.

You can always reach out to our Slack channel if you get stuck. There are almost 900 users in our general chat and people often get responses from users. You’re also likely to get quick responses from me on there as well. Someone could at least guide you in the right direction if you’re beginning to feel frustrated by any part of the tooling.

10 reactions
toby-murray-snow-software commented, Sep 23, 2020

As an anecdote, I can say that the Cloud Management team at Snow Software has been using OpenAPI specs in production for a couple of years. We have one spec that’s ~21k lines, another that’s ~2k lines, and we are moving forward with more.

That said, in total we use ~4 or 5 versions between the Swagger generator and the OpenAPITools generator because almost all of them have at least one bug that impacts us. We generate Java clients, Java server stubs, and TypeScript clients.

You should expect to be overriding some of the templates to suit your own purposes. Even if it’s just to keep dependencies more up to date than this project does, this is not a “run this jar and forget about it” project in my experience.

My personal opinions

  • your experience mirrors my own: expect to run into frequent bugs that mean things are fundamentally broken out of the box
  • the documentation is overwhelming and often confusing
  • the team here has been responsive to both bugs and PRs, so in all likelihood you can fix issues you run into
  • the monorepo structure makes everything more confusing for a user. Because all the generators and plugins share one repo, the documentation and issues also have to span everything, and searching for issues for a specific use case (e.g. things that impact OkHTTP Java client generation) is more difficult. It also raises the bar for contributing.
  • so far, we have found no better alternative. It is better to deal with the bugs from this project than e.g. to have developers writing API model objects and inevitably screwing things up or doing things they’re not supposed to. After initial set up, use of this project has introduced a healthy friction with API development that enables us to maintain a decent API with many contributors.
  • the OpenAPI v2 vs. v3 split is a bit of a mess that you’ll have to wade through; go with v3 if you’re able to.