Standardized description of data imports
It would be worth considering a standardized config file for staging web-accessible data. For example, data.yml:
- src: s3://openneuro.org/ds000113
  dst: /data
  recursive: True
S3, FTP, HTTP, and other protocols could be supported. This config file would be read by a script called within the Dockerfile to stage the necessary data for the notebook to run. What do people think?
Related to #199.
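To make the proposal concrete, here is a minimal sketch of what such a staging script might look like. The filename stage_data.py, the exact schema keys, and the choice of the AWS CLI for s3:// sources are illustrative assumptions based on the data.yml example above, not an agreed design:

    # stage_data.py -- hypothetical sketch of the proposed staging script.
    # The data.yml schema (src/dst/recursive), the filename, and the use of
    # the AWS CLI for s3:// sources are assumptions, not an agreed design.
    import pathlib
    import subprocess
    import urllib.request

    import yaml  # PyYAML


    def stage(entry):
        """Fetch a single data.yml entry into its destination directory."""
        src = entry["src"]
        dst = pathlib.Path(entry["dst"])
        dst.mkdir(parents=True, exist_ok=True)
        if src.startswith("s3://"):
            # Public buckets (e.g. OpenNeuro) can be read without credentials.
            cmd = ["aws", "s3", "cp", src, str(dst), "--no-sign-request"]
            if entry.get("recursive", False):
                cmd.append("--recursive")
            subprocess.run(cmd, check=True)
        elif src.startswith(("http://", "https://", "ftp://")):
            # Single-file fetch; recursive HTTP/FTP would need e.g. wget -r.
            filename = src.rstrip("/").rsplit("/", 1)[-1]
            urllib.request.urlretrieve(src, dst / filename)
        else:
            raise ValueError(f"unsupported protocol: {src!r}")


    if __name__ == "__main__":
        with open("data.yml") as f:
            for entry in yaml.safe_load(f):
                stage(entry)

The Dockerfile could then simply COPY data.yml into the image and RUN the script during the build.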
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Unfortunately, I did not have time to work on this. As a side note, I think that listing different use cases/examples would help scope the feature. One could also take a more data-driven approach and use https://www.kaggle.com/github/github-repos/home to see what data intake methods people use in the wild (see the sketch after this comment).
On Fri, Feb 15, 2019 at 9:40 AM Chris Holdgraf <notifications@github.com> wrote:
Yep, basically I’m just saying “rather than building technical assumptions and dependencies within repo2docker, we’ll make sure to highlight these tools for data import/export/versioning in the documentation and examples we put together”
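As an illustration of the data-driven approach suggested in the comment above, one could query the public github_repos sample tables on BigQuery to count how often common download tools appear in Dockerfiles. This is only a sketch: the regex, the list of tools, and the idea of grepping Dockerfiles are assumptions, not a vetted survey method:

    # Hypothetical sketch: count download tools mentioned in Dockerfiles
    # using the public BigQuery github_repos sample tables.
    from google.cloud import bigquery

    QUERY = """
    SELECT
      REGEXP_EXTRACT(content, r'\\b(wget|curl|aws s3|datalad|quilt)\\b') AS tool,
      COUNT(*) AS n
    FROM `bigquery-public-data.github_repos.sample_contents`
    WHERE sample_path LIKE '%Dockerfile'
    GROUP BY tool
    HAVING tool IS NOT NULL
    ORDER BY n DESC
    """

    client = bigquery.Client()
    for row in client.query(QUERY).result():
        print(f"{row.tool}: {row.n}")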