
Cryptic R errors need better reporting #32

Open
lhsego opened this issue Oct 20, 2015 · 1 comment
Comments

@lhsego
Member

lhsego commented Oct 20, 2015

This error, which I produced using datadr::makeDisplay() on an hdfsConn ddo,

---------------------------------
There were R errors, showing 30:

Warning message:
Autokill is true and terminating job_1441994449703_0070
Error in .jcall("RJavaTools", "Ljava/lang/Object;", "invokeMethod", cl,  : 
  java.io.FileNotFoundException: Cannot access /user/d3p423/tmp/tmp_output-6ae2a6d4f0966b3244a47a8f588f6c36: No such file or directory.
Calls: makeDisplay ... <Anonymous> -> .jrcall -> .jcall -> .jcheck -> .Call
In addition: Warning message:
In Rhipe:::rhwatch.runner(job = job, mon.sec = mon.sec, readback = readback,  :
 Job failure, deleting output: /user/d3p423/tmp/tmp_output-6ae2a6d4f0966b3244a47a8f588f6c36:   
Execution halted 
Warning message:
system call failed: Cannot allocate memory 

is apparently a red herring when there are R errors in the Hadoop job--but the only error that is well described is the Java error about not being able to access a file. Would sure be nice to have a sense of what the R errors actually are.

@saptarshiguha
Contributor

Hmm, true. I have a feeling that you'll find the R error in the job log and
for some reason it didn't make its way back to the R console.

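In the meantime, the R error can often be dug out of the YARN task logs by hand. A minimal sketch, assuming YARN log aggregation is enabled on the cluster (the job ID is the one from the "Autokill" warning above; the `yarn logs` step is shown commented out since it needs a live cluster):

```shell
# Job ID reported in the Rhipe "Autokill" warning above.
JOB_ID="job_1441994449703_0070"

# A Hadoop job ID maps to its YARN application ID by swapping the prefix.
APP_ID=$(echo "$JOB_ID" | sed 's/^job_/application_/')
echo "$APP_ID"    # application_1441994449703_0070

# The mappers' stderr (where R writes its error messages) ends up in the
# aggregated container logs; grepping for "Error" usually surfaces the real
# R failure that the Java FileNotFoundException is masking:
# yarn logs -applicationId "$APP_ID" 2>/dev/null | grep -B 2 -A 10 "Error"
```

This is a manual workaround, not a fix; the issue itself (propagating the R error back to the console) still stands.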
