Opening large save files #25

Open
drserajames opened this issue Aug 22, 2017 · 6 comments

@drserajames
Member

I'm working with some larger save files downloaded from acmacs. For one (7.3 MB), I get the following error message when I try to open the save:

16359392 bytes have been tenured, next gc will be global.
See the documentation for variable GLOBAL-GC-BEHAVIOR for more information.
Error: An explicit gc call caused a need for 519307264 more bytes of
heap. The operating system will not make the space available
because of a lack of swap space or some other operating system
imposed limit or memory mapping collision.
[condition type: STORAGE-CONDITION]

For a slightly smaller file (5.3 MB), I can load the save, but when I open the map I get the following error message:

Stack overflow (signal 1000)

I haven't attached the files, as they contain WHO data that I don't want to put on GitHub, but I expect any large file would reproduce the problem. Has anyone encountered this before, and is there a way around it?

Thanks,
Sarah

@terrycojones
Member

Hi @drserajames

Can you say a bit more about the machine you're trying this on? How much RAM does it have, and do you have many other programs running? If you run open /Applications/Utilities/Activity\ Monitor.app and switch to the Memory tab, how much free memory does it say you have?

I would think the request for 519307264 bytes should normally be ok, as that's just under half a GB (about 495 MB), and I'm guessing you have at least 8 GB of RAM. So it might be a matter of shutting down other apps.

If that doesn't work, we can write a few lines of C to make a low-level memory request of that size and see if it fails. Actually, I just wrote this to try it on my machine. Save it to a file called memory.c:

$ cat memory.c
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char **argv)
{
  void *x;
  long size;

  if (argc != 2){
    printf("I need a size argument.\n");
    return 1;
  }

  /* atol, not atoi, since size is a long. */
  size = atol(argv[1]);
  x = malloc(size);

  if (x){
    printf("Allocated %ld bytes.\n", size);
    free(x);
  }
  else {
    printf("Could not allocate %ld bytes.\n", size);
    return 1;
  }

  return 0;
}

Then compile and run it:

$ cc memory.c
$ ./a.out 519307264
Allocated 519307264 bytes.

If you get a failure on the cc step, try xcode-select --install to install the Xcode command line tools, or brew install gcc, to get a C compiler.


On the second issue, there is a stack limit imposed on processes that you can change. If you do this

$ ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 4864
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 709
virtual memory          (kbytes, -v) unlimited

You'll see the stack size limit. You could try upping that (ulimit -s unlimited) to see if you can then work with the smaller file.
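
If the shell won't cooperate, a process can also inspect and raise its own soft stack limit with the standard POSIX getrlimit/setrlimit calls. Here's a minimal sketch along the lines of memory.c above (the name stacklimit.c is just for illustration):

$ cat stacklimit.c
#include <stdio.h>
#include <sys/resource.h>

int
main(void)
{
  struct rlimit rl;

  /* Read the current soft and hard stack limits. */
  if (getrlimit(RLIMIT_STACK, &rl) != 0){
    perror("getrlimit");
    return 1;
  }

  printf("soft stack limit: %llu bytes\n", (unsigned long long)rl.rlim_cur);
  printf("hard stack limit: %llu bytes\n", (unsigned long long)rl.rlim_max);

  /* Raise the soft limit to the hard limit. A non-root
     process can go this far, but no further. */
  rl.rlim_cur = rl.rlim_max;
  if (setrlimit(RLIMIT_STACK, &rl) != 0){
    perror("setrlimit");
    return 1;
  }

  printf("soft stack limit is now %llu bytes\n", (unsigned long long)rl.rlim_cur);
  return 0;
}

This only changes the limit for the calling process and its children, so it's mainly a way to see what the kernel will actually allow; for lispmds itself, the ulimit route in the shell that launches it is simpler.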

@drserajames
Member Author

Thanks, Terry.

I'm on my MacBook Pro (15-inch, 2016), macOS Sierra version 10.12.5
Processor: 2.7 GHz Intel Core i7
Memory: 16 GB 2133 MHz LPDDR3
Graphics: Intel HD Graphics 530 1536 MB

Physical memory: 16 GB
Memory used: 6.33 GB
Cached files: 5.88 GB
Swap used: 122.8 MB
On the graph, the memory pressure sits at about a quarter and doesn't increase when I run lispmds.

I was running a few other programs. I closed everything apart from a terminal, some text files, and Activity Monitor. I still get the same problem:

CL-USER(1):
8872088 bytes have been tenured, next gc will be global.
See the documentation for variable GLOBAL-GC-BEHAVIOR for more information.
8701200 bytes have been tenured, next gc will be global.
See the documentation for variable GLOBAL-GC-BEHAVIOR for more information.
8409992 bytes have been tenured, next gc will be global.
See the documentation for variable GLOBAL-GC-BEHAVIOR for more information.
9967240 bytes have been tenured, next gc will be global.
See the documentation for variable GLOBAL-GC-BEHAVIOR for more information.
9420760 bytes have been tenured, next gc will be global.
See the documentation for variable GLOBAL-GC-BEHAVIOR for more information.
24911040 bytes have been tenured, next gc will be global.
See the documentation for variable GLOBAL-GC-BEHAVIOR for more information.
18796512 bytes have been tenured, next gc will be global.
See the documentation for variable GLOBAL-GC-BEHAVIOR for more information.
16641616 bytes have been tenured, next gc will be global.
See the documentation for variable GLOBAL-GC-BEHAVIOR for more information.
Error: An explicit gc call caused a need for 523501568 more bytes of
heap. The operating system will not make the space available
because of a lack of swap space or some other operating system
imposed limit or memory mapping collision.
[condition type: STORAGE-CONDITION]

I ran memory.c with no problems:
$ cc memory.c
$ ./a.out 519307264
Allocated 519307264 bytes.


When I try to open the smaller file, I get the error when the table loads (not when I open the map; my earlier post was wrong).

CL-USER(1):
8968480 bytes have been tenured, next gc will be global.
See the documentation for variable GLOBAL-GC-BEHAVIOR for more information.
9563176 bytes have been tenured, next gc will be global.
See the documentation for variable GLOBAL-GC-BEHAVIOR for more information.
9571408 bytes have been tenured, next gc will be global.
See the documentation for variable GLOBAL-GC-BEHAVIOR for more information.
11165088 bytes have been tenured, next gc will be global.
See the documentation for variable GLOBAL-GC-BEHAVIOR for more information.
13491736 bytes have been tenured, next gc will be global.
See the documentation for variable GLOBAL-GC-BEHAVIOR for more information.
18113728 bytes have been tenured, next gc will be global.
See the documentation for variable GLOBAL-GC-BEHAVIOR for more information.
9217856 bytes have been tenured, next gc will be global.
See the documentation for variable GLOBAL-GC-BEHAVIOR for more information.
14751880 bytes have been tenured, next gc will be global.
See the documentation for variable GLOBAL-GC-BEHAVIOR for more information.
Stack overflow (signal 1000)

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 709
virtual memory          (kbytes, -v) unlimited

I'm not allowed to modify the stack size:
$ ulimit -s unlimited
-bash: ulimit: stack size: cannot modify limit: Operation not permitted

Thanks for your help. As you can tell, I have no understanding of what's happening.

@terrycojones
Member

I'm not sure what to do about the first problem. Let's wait to see if @dsmithgithub has anything to say.

If you give ulimit a numeric value, it will (sometimes) let you change the limit, e.g., ulimit -s 12000. I don't understand the behavior either...

$ bash
$ ulimit -s 12000
$ ulimit -s 9000
$ ulimit -s 15000
bash: ulimit: stack size: cannot modify limit: Operation not permitted
$ ulimit -s 9000
$ ulimit -s 12000
bash: ulimit: stack size: cannot modify limit: Operation not permitted
$ ulimit -s 10000
bash: ulimit: stack size: cannot modify limit: Operation not permitted
$ exit

Note that I do all of that in a new shell, just to experiment; when you type the final exit you're back in your original (undisturbed) shell, so it's an easy way to play around without messing up your current one. I don't know why ulimit -s 12000 works the first time and then fails.

@davidfburke

Once a limit has been decreased by a ulimit call, it cannot be increased again except as the superuser. Try creating another shell and setting it to unlimited there.
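
(For what it's worth, that's because bash tracks a soft and a hard value for each limit, which you can inspect separately with the builtin's -S and -H flags:

$ ulimit -S -s   # current soft stack limit
$ ulimit -H -s   # hard ceiling; a non-root user cannot raise limits past this

A plain ulimit -s N sets both, which is why a later attempt to increase the limit fails, as in the transcript above.)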

In Lisp, typing
(room)
gives details of the heap.

@dsmithgithub
Contributor

dsmithgithub commented Aug 22, 2017 via email

@drserajames
Member Author

Thanks all.

Yes, there are more than 2000 points, so that is the problem.

I'm using the licensed copy of Allegro CL.
