Standardized description of data imports


It would be worth considering a standardized config file for staging web-accessible data, for example data.yml:

- src: s3://openneuro.org/ds000113
  dst: /data
  recursive: True

S3, FTP, HTTP, and other protocols could be supported. This config file would be read by a script called within the Dockerfile to stage the necessary data for the notebook to run. What do people think?
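
As a rough illustration of what such a staging script could look like, here is a minimal sketch that reads a data.yml in the format above. It assumes PyYAML and the AWS CLI are available in the image; fetch_entry and the limited protocol handling are illustrative only, not an existing repo2docker feature:

# Illustrative sketch: stage each entry from a data.yml like the example above.
# Only HTTP(S) and S3 are handled here; the S3 branch shells out to the AWS CLI.
import subprocess
import urllib.request
from pathlib import Path

import yaml  # PyYAML


def fetch_entry(entry):
    src, dst = entry["src"], Path(entry["dst"])
    dst.mkdir(parents=True, exist_ok=True)
    if src.startswith("s3://"):
        cmd = ["aws", "s3", "cp", src, str(dst)]
        if entry.get("recursive"):
            cmd.append("--recursive")
        subprocess.run(cmd, check=True)
    elif src.startswith(("http://", "https://")):
        # Save the file under its own name inside the destination directory
        urllib.request.urlretrieve(src, str(dst / src.rsplit("/", 1)[-1]))
    else:
        raise ValueError(f"Unsupported protocol in {src}")


if __name__ == "__main__":
    with open("data.yml") as f:
        for entry in yaml.safe_load(f):
            fetch_entry(entry)

A Dockerfile would then only need a single RUN line invoking this script after copying data.yml into the image.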

Related to #199.

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 19 (12 by maintainers)

Top GitHub Comments

1 reaction

chrisgorgo commented, Feb 15, 2019

Unfortunately, I did not have time to work on this. As a side note, I think that listing different use cases/examples would help scope the feature. One can also try a more data-driven method and use https://www.kaggle.com/github/github-repos/home to see what data intake methods people use in the wild.
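
For concreteness, the kind of data-driven survey suggested here might look like the sketch below. This is hedged: it assumes the google-cloud-bigquery client, configured credentials, and the public bigquery-public-data.github_repos dataset behind the Kaggle page; the regex of download commands is just an illustrative starting point:

# Exploratory query: count occurrences of common data-download commands in
# the public GitHub sample, to see what intake methods people use in the wild.
from google.cloud import bigquery

client = bigquery.Client()
query = """
SELECT
  REGEXP_EXTRACT(content, r'(wget|curl|urlretrieve|aws s3 cp)') AS method,
  COUNT(*) AS n
FROM `bigquery-public-data.github_repos.sample_contents`
WHERE REGEXP_CONTAINS(content, r'wget|curl|urlretrieve|aws s3 cp')
GROUP BY method
ORDER BY n DESC
"""
for row in client.query(query).result():
    print(row.method, row.n)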


1 reaction

choldgraf commented, Feb 15, 2019

> I am afraid I did not get your point. Do you mean we need to create documentation in Binder for repo2data or intake?

Yep, basically I’m just saying “rather than building technical assumptions and dependencies within repo2docker, we’ll make sure to highlight these tools for data import/export/versioning in the documentation and examples we put together”
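
As an illustration of that documentation-first approach, a repository could ship an Intake catalog and load data at notebook runtime instead of at image-build time. The catalog file and source name below are invented for the example; only the intake.open_catalog API and the catalog YAML layout reflect the real library:

import intake

# catalog.yml (hypothetical) might contain:
#
# sources:
#   participants:
#     driver: csv
#     args:
#       urlpath: "https://example.org/ds000113/participants.csv"

cat = intake.open_catalog("catalog.yml")
df = cat.participants.read()  # fetched on demand, returned as a DataFrame

Because the catalog is plain YAML in the repository, repo2docker needs no new dependencies; the data handling stays entirely in userland, which is exactly the separation argued for above.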
