Configure using instance metadata #1
base: master
Conversation
Thanks for the pull request! Nerves is definitely bringing new challenges - I have no experience with Elixir starting faster than the EC2 instance metadata :-) Additional questions or remarks:
Cheers
Hi Peter,

Did you see any specific problem with ExAws? That's what I tested with, and it worked for me. ExAws does its own credential loading, but it doesn't deal with the metadata not being available, and it doesn't dynamically configure things like the region that it could. So it doesn't hurt to wait for the metadata to be available before trying to connect with ExAws. Probably the right thing to do is to enhance ExAws a bit.

I didn't get the AWS module working, as it has funny runtime dependencies, e.g. trying to load tzdata from the internet as part of its startup process. I generally like the idea of generating the library from the metadata maintained by Amazon, as it's likely to be up to date and to support the sprawl of new services. It doesn't seem to be as popular or well maintained, though. I think this code would work for AWS, but I need to test it in a non-Nerves environment. I wanted to make the PR to you now, as you are currently working on it.

It's necessary to periodically refresh the credentials from the instance profile, as they expire (see the sketch below).

I didn't specifically set a timeout on hackney. In AWS it generally responds quickly, but the data may not be available. Perhaps it would be useful to set it to something short and let the retry logic handle it.

A hook for setting the stream name might be interesting. We do something similar with Elasticsearch by logging to time-based indexes which we expire. I am finding CloudWatch Logs to be relatively weak, so I am thinking about logging messages to Kinesis, feeding them to Elasticsearch, and simultaneously making them available to the developer in real time.

My overall goal is what I am calling "Cloud Native Elixir": a minimal, legacy-free runtime instance with monitoring and logging included, deployment using CodeBuild/CodePipeline/CodeDeploy, and good integration with other Amazon services for configuration (e.g. Parameter Store, KMS) and data storage (S3, RDS, etc.).

Cheers,

PS: perhaps it's easier to chat in real time. I am jakemorrison on Elixir Slack and Discord, and reachfh on IRC.
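To illustrate the credential-refresh point above, here is a minimal sketch (not the code in this PR): the module name, the intervals, and the choice to leave the JSON body undecoded are all assumptions. It polls the instance profile with hackney and simply retries while the metadata endpoint is unavailable.

```elixir
defmodule MyApp.InstanceCredentials do
  @moduledoc """
  Sketch only: periodically refresh temporary credentials from the EC2
  instance profile, since they expire. Names and intervals are illustrative.
  """
  use GenServer

  @creds_url "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

  def start_link(opts \\ []),
    do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts) do
    send(self(), :refresh)
    {:ok, %{credentials: nil}}
  end

  @impl true
  def handle_info(:refresh, state) do
    case fetch_credentials() do
      {:ok, creds} ->
        # Refresh well before the Expiration reported in the JSON body;
        # a fixed 5-minute interval is used here for simplicity.
        Process.send_after(self(), :refresh, :timer.minutes(5))
        {:noreply, %{state | credentials: creds}}

      {:error, _reason} ->
        # Metadata not reachable yet (e.g. early in boot, or outside EC2);
        # retry shortly and keep whatever credentials we already have.
        Process.send_after(self(), :refresh, :timer.seconds(10))
        {:noreply, state}
    end
  end

  defp fetch_credentials do
    # Short timeouts so an unavailable endpoint fails fast, as discussed above.
    opts = [connect_timeout: 500, recv_timeout: 500, with_body: true]

    with {:ok, 200, _headers, role} <- :hackney.get(@creds_url, [], "", opts),
         role = String.trim(role),
         {:ok, 200, _headers, body} <- :hackney.get(@creds_url <> role, [], "", opts) do
      # `body` is JSON containing AccessKeyId, SecretAccessKey, Token and
      # Expiration; decode it with whatever JSON library the project uses.
      {:ok, body}
    else
      other -> {:error, other}
    end
  end
end
```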
Hi Jake, When I run my app outside of EC2, the ExAws client is never activated. There is an endless loop where Please note that all the credentials obtained in The core of what you are trying to achieve is a delayed start of ExAws client. An ideal solution would be to enhance the ExAws with something like
Strange, I'd love to understand the details... A quick browse shows that https://github.com/ex-aws/ex_aws/blob/master/lib/ex_aws/instance_meta.ex does not retrieve the region. As a workaround, set the region via an env variable:

EC2_AVAIL_ZONE=`curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone`
EC2_REGION="`echo \"$EC2_AVAIL_ZONE\" | sed -e 's:\([0-9][0-9]*\)[a-z]*\$:\\1:'`"
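On the Elixir side, the region exported above could then be consumed through ExAws's runtime config, assuming the ExAws version in use resolves `{:system, var}` tuples (a sketch, not verified against this project's setup):

```elixir
# config/config.exs
import Config

# Resolve the region at runtime from the EC2_REGION variable exported by
# the shell workaround above.
config :ex_aws, region: {:system, "EC2_REGION"}
```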
I found such a problem when using Timex for Logger formatting. Timex depends on tzdata, so I had to switch to DateTime (I didn't learn how to disable tzdata updating the timezone info). I contacted you on Elixir Slack as pmenhart. Thanks
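If memory serves, tzdata can be told not to fetch timezone updates over the network at runtime, falling back to the data bundled with the package; a one-line config sketch (worth double-checking against the tzdata docs):

```elixir
# config/config.exs
import Config

# Stop tzdata from downloading timezone updates at startup/runtime;
# it then uses the timezone data shipped with the package.
config :tzdata, :autoupdate, :disabled
```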
* This is experimental code. Two options are explored: 1. Remove log messages from the buffer (which delays the transfer implicitly). This is safer, but some messages are lost. 2. Delay the transfer, then re-try. Consequences are unknown; there is a risk of compromising system stability. Option #2 is used by default. To trigger option #1, set the config purge_buffer_if_throttled: true
* Added a delay after connection/timeout errors
* Added logging of successful flushes (only to the CloudWatch backend). Useful for troubleshooting; currently commented out
* Added a heap limit, restricting the Logger process to a hardwired value of 32 MiB, including the message queue (see the sketch below)
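The usual mechanism for that heap limit is the process `:max_heap_size` flag; a sketch of what a hardwired 32 MiB cap could look like (the actual commit may differ, and the surrounding backend module is not shown here):

```elixir
# :max_heap_size is measured in machine words, so convert 32 MiB to words
# using the VM's word size (8 bytes on a 64-bit VM).
max_heap_words = div(32 * 1024 * 1024, :erlang.system_info(:wordsize))

# Run inside the Logger backend process, e.g. from its init/1 callback.
Process.flag(:max_heap_size, %{
  size: max_heap_words,
  # Kill the process if it exceeds the limit...
  kill: true,
  # ...and report the event through the error logger.
  error_logger: true
})
```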
The branch was force-pushed from f72ac7e to ff3f8f5.
I am using this module with Nerves, so logging runs very early in the boot process. I reorganized the initialization and configuration process so that it works reliably.
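As a sketch of that deferred initialization (not the PR's actual code; the backend module passed in would be whichever CloudWatch backend this project defines), one could attach the backend only once the metadata endpoint answers, retrying in the background until it does:

```elixir
defmodule MyApp.DeferredLogging do
  @moduledoc """
  Sketch only: attach a Logger backend once EC2 instance metadata is
  reachable, so early-boot logging on Nerves does not depend on the
  network being up yet.
  """

  @metadata_url 'http://169.254.169.254/latest/meta-data/'

  # `backend` is whichever CloudWatch Logger backend module the project uses.
  def attach_when_ready(backend) do
    # :httpc lives in :inets, which may not be started this early in boot.
    {:ok, _started} = Application.ensure_all_started(:inets)
    Task.start(fn -> wait_and_attach(backend) end)
  end

  defp wait_and_attach(backend) do
    case :httpc.request(:get, {@metadata_url, []}, [timeout: 500], []) do
      {:ok, {{_version, 200, _reason}, _headers, _body}} ->
        Logger.add_backend(backend)

      _not_ready ->
        # Metadata not available yet; try again in a second.
        Process.sleep(1_000)
        wait_and_attach(backend)
    end
  end
end
```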