
VARCHAR fields are padded with additional characters until the field's maximum length is reached when trimming is disabled

See original GitHub issue

First of all, thank you for all the effort that you’ve put in releasing this library and sharing it. It looks great.

We’ve been testing it and have found that when we process EBCDIC files with a varchar field at the end of the record, the field gets padded with additional characters when it is translated into ASCII (UTF-8), up to the maximum length defined for that field in the copybook. Naturally, this happens only when the varchar field in the EBCDIC file uses fewer bytes than the maximum specified in the copybook and the Cobrix trimming option is set to “none”.
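As a hedged illustration (this is not Cobrix’s actual code), the sketch below shows how a fixed-width reader produces exactly this symptom: it always materialises the full declared PIC width, so a short varchar value comes back padded unless a trimming policy strips it. The `cp037` code page, the helper name, and the padding byte are assumptions for the example.

```python
# Minimal sketch of fixed-width field decoding with a trimming policy.
# PIC_LENGTH mirrors the PIC X(10) field from the copybook below;
# 0x40 is the EBCDIC space, and 'cp037' is one common EBCDIC code page.
PIC_LENGTH = 10

def decode_field(raw: bytes, trimming: str = "none") -> str:
    # A fixed-width reader slices (or pads to) the full declared width,
    # so a short varchar value comes back padded to PIC_LENGTH.
    text = raw.ljust(PIC_LENGTH, b"\x40").decode("cp037")
    if trimming == "right":
        return text.rstrip()
    return text

print(repr(decode_field(b"\xF1")))           # '1' plus 9 spaces (10 chars)
print(repr(decode_field(b"\xF1", "right")))  # just '1'
```

With trimming set to “right” the padding disappears, which is why the issue only shows up under the “none” policy.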

For instance, given the following copybook (taken from

01  R.
  03 N     PIC X(1).
  03 V     PIC X(10).

If the input EBCDIC record was:

| RDW                 | N    | V    |
|---------------------|------|------|
| 0x00 0x00 0x02 0x00 | 0xF4 | 0xF1 |

The expected output when trimming is set to “none” should be just 2 bytes:

  • N -> the ebcdic2ascii translation for 0xF4
  • V -> the ebcdic2ascii translation for 0xF1

Instead, the actual output is 11 bytes:

  • N -> the ebcdic2ascii translation for 0xF4 -> OK
  • V -> the ebcdic2ascii translation for 0xF1 plus 9 additional characters (whitespaces or non-printable, depending on the codepage used) -> Not OK
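The behaviour being asked for can be sketched as follows, assuming the actual payload length is known from the RDW: bound the last varchar field by the bytes actually present in the record instead of the declared PIC X(10) width. This is a hypothetical illustration, not Cobrix’s implementation; the `cp037` code page is an assumption, and the field offsets come from the copybook above.

```python
# Hypothetical sketch: decode only the bytes that are actually present
# in the record, using the copybook offsets for the layout.
record = b"\xF4\xF1"  # payload after the RDW: N = 0xF4, V = 0xF1

n = record[0:1].decode("cp037")       # PIC X(1)  -> '4'
v = record[1:1 + 10].decode("cp037")  # PIC X(10), but the slice stops at
                                      # the real end of the record -> '1'
print(n, v, len(n) + len(v))          # 2 characters in total, no padding
```

Because Python slicing past the end of the buffer simply stops, the varchar field comes out with only the bytes the source record actually contained.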

Would it be possible to fix this so that the translated field doesn’t get those extra characters that don’t come from the source field?

Also, are there any plans to support multiple varchar fields in the copybook?

Thank you!

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 7

Top GitHub Comments

davidrg1978 commented, Oct 18, 2019

Snapshot version is not slower than 1.0.1. Whatever our performance issue is, it’s not caused by the snapshot version.

yruslan commented, Oct 18, 2019

Cool, thanks! Please, let me know if the snapshot version is slower than 1.0.1. If no performance degradation is observed in 1.0.2-SNAPSHOT, we will release 1.0.2 on Monday.

Read more comments on GitHub >
