Can I set the task memory limit higher than 2GB?

Tag: hadoop Author: wangyanzhe1314 Date: 2010-08-09

The Hadoop map-reduce configuration provides the mapred.task.limit.maxvmem and mapred.task.default.maxvmem properties. According to the documentation, both are values of type long, i.e. a number, in bytes, that represents the upper/default VMEM limit associated with a task. It appears that "long" in this context means 32 bits, and setting a value higher than 2GB may lead to a negative value being used as the limit. I am running on a 64-bit system, and 2GB is a much lower limit than I actually want to impose.

Is there any way around this limitation?

I am using hadoop version 0.20.1
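For context, this is how I am setting the limits in mapred-site.xml; the property names are the ones from the documentation quoted above, and 4294967296 is 4GB expressed in bytes:

```xml
<configuration>
  <!-- Upper bound any job may request, in bytes (4 GB here) -->
  <property>
    <name>mapred.task.limit.maxvmem</name>
    <value>4294967296</value>
  </property>
  <!-- Default VMEM limit applied to a task, in bytes -->
  <property>
    <name>mapred.task.default.maxvmem</name>
    <value>4294967296</value>
  </property>
</configuration>
```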

Answer 1

The long in this context refers to the width of the field that stores the setting, not the amount of memory that can be addressed. In Java a long is always 64 bits, regardless of whether the host system is 32-bit or 64-bit, so it can hold any value from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 inclusive. A limit of 4GB or more therefore fits comfortably without overflowing into a negative number.
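A quick sketch to confirm the point: in Java, a value such as 4GB in bytes exceeds the 32-bit int range but is represented exactly as a long (the class name below is arbitrary, and the parsed string just mimics how a byte-count property value would be read):

```java
public class LongLimitDemo {
    public static void main(String[] args) {
        // 4 GB expressed in bytes; note the L suffix so the
        // multiplication itself is done in 64-bit arithmetic
        long fourGb = 4L * 1024 * 1024 * 1024;
        System.out.println(fourGb);                      // 4294967296

        // This value does not fit in a 32-bit int (max 2147483647)
        System.out.println(fourGb > Integer.MAX_VALUE);  // true

        // A long is 64 bits on every platform
        System.out.println(Long.SIZE);                   // 64

        // Parsing a byte-count string, as a configuration reader would
        long parsed = Long.parseLong("4294967296");
        System.out.println(parsed == fourGb);            // true
    }
}
```

Without the `L` suffix, `4 * 1024 * 1024 * 1024` would overflow 32-bit int arithmetic before the assignment and yield 0, which is the kind of truncation that produces surprising limits.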