
Add input_file_name built-in function #6051

Open
nkarpov opened this issue Apr 18, 2023 · 3 comments
Labels
enhancement New feature or request

Comments


nkarpov commented Apr 18, 2023

Is your feature request related to a problem or challenge?

It's useful to project the source input file of a data row to support file aware operations, for example for storage frameworks (delta-io/delta-rs#850). This is a built-in function in Spark, for example, https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.input_file_name.html

There was prior work before the repository split, but it appears to have lost momentum:

apache/arrow#9944
apache/arrow#9976
apache/arrow#18601

Based on the conversations in the prior PRs and issues it looks like there was consensus that this feature should live in datafusion as opposed to arrow, so creating an issue here.

Describe the solution you'd like

A built-in function, input_file_name(), supported in both the SQL and DataFrame APIs, that returns the name of the file from which the row was originally scanned.
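For illustration only (this is neither DataFusion nor Spark code, and the helper and file names are hypothetical): a minimal Python sketch of the desired per-row semantics, where every scanned row carries the name of its source file, as `SELECT *, input_file_name()` would.

```python
# Hypothetical sketch of input_file_name() semantics using plain CSV files.
import csv
import os
import tempfile

def scan_with_input_file_name(paths):
    """Yield (row, source_file) pairs, mimicking SELECT *, input_file_name()."""
    for path in paths:
        with open(path, newline="") as f:
            for row in csv.reader(f):
                # Each row is tagged with the file it was scanned from.
                yield row, os.path.basename(path)

# Build two small input files in a temp directory.
tmp = tempfile.mkdtemp()
paths = []
for name, rows in [("a.csv", [["1"], ["2"]]), ("b.csv", [["3"]])]:
    p = os.path.join(tmp, name)
    with open(p, "w", newline="") as f:
        csv.writer(f).writerows(rows)
    paths.append(p)

result = list(scan_with_input_file_name(paths))
# Each row carries its source file:
# [(['1'], 'a.csv'), (['2'], 'a.csv'), (['3'], 'b.csv')]
```

Note that this is per-row provenance, which matters once a table is backed by many files: two rows of the same result set can report different file names.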

Describe alternatives you've considered

No response

Additional context

No response

@nkarpov nkarpov added the enhancement New feature or request label Apr 18, 2023

SteveLauC commented Nov 3, 2023

I am interested in implementing this, as requested in discussion#7979. I just checked the previous PRs, and here are my thoughts:

  1. What is the correct semantics of this input_file_name() function?

    1. Return all the files registered by a table
    2. Return the file that a specific row comes from (at runtime/execution time)

    arrow#9944 seems to choose option 1, if I understand correctly; in my use case, I would like to have option 2.

    Update: after taking a detailed look at that PR, I realized it implemented option 2 by storing the filename in the Schema.metadata

  2. For option 1, this info is stored in each file format's ExecutionPlan node, so we can fetch it after generating the physical plan

  3. For option 2, this info becomes available when each file format's XXXOpener type actually opens a file (whether on the local file system or object storage)
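A toy sketch of the distinction between the two options, using made-up Python types rather than real DataFusion APIs: under option 1 the full file list is known at plan time, while under option 2 the opener only learns the concrete file when it opens it, and can tag each row it produces.

```python
# Toy illustration only; these types and names are hypothetical,
# not DataFusion's ExecutionPlan or FileOpener APIs.

class ToyListingScan:
    """Option 1: the plan node knows the full file list before execution."""

    def __init__(self, files):
        self.files = files  # known at plan time

    def input_files(self):
        # "All the files registered by a table" can be read off the plan.
        return list(self.files)

def toy_opener(path, rows):
    """Option 2: the opener sees the concrete file only when opening it,
    so it can attach that path to every row it produces."""
    return [(row, path) for row in rows]  # per-row provenance

plan = ToyListingScan(["part-0.parquet", "part-1.parquet"])
batch = toy_opener("part-0.parquet", ["r1", "r2"])
```

The practical difference: option 1 answers "which files back this table", while option 2 answers "which file did this particular row come from", which is what the delta-rs use case above needs.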


Friendly ping @alamb and @jorgecarleitao, since you were the reviewers of the previous PR; I would like to hear your thoughts! :)

Also, I haven't made any serious contributions to DataFusion yet, so guidance would be highly appreciated!


alamb commented Nov 3, 2023

What is the correct semantics of this input_file_name() function

I do not know. However, the use case of "returns a string of the file from which the row was originally scanned" suggests it would be option 1 in your list: "Return all the files registered by a table"

The spark docs say

Creates a string column for the file name of the current Spark task.

I am not sure how that maps to what file is processed (i.e., does the same task process multiple input files?)

As for the implementation, perhaps it could be modeled on how partition columns are injected, though as you will find, that is non-trivial to implement.


alamb commented Nov 3, 2023

My "gut feeling" is that this will be a very complicated feature to implement with the existing Listing table provider.
