
Limits of CPU/Memory are confusing. Are they per container or per group? #24776

Closed
goenning (Contributor) opened this issue Feb 12, 2019 — with docs.microsoft.com · 3 comments

East US (Linux), for example, has a limit of 4 CPU and 14 GB Memory.

Should I be able to deploy 5 containers of 1 CPU and 1 GB each inside a single container group? It doesn't seem possible, as I'm getting this error: The requested resource is not available in the location 'eastus' at this moment. Please retry with a different resource request or in another location. Resource requested: '5' CPU '5' GB memory 'Linux' OS

If so, I assume the limit of 4 CPU is per container group. But then why is there a limit of 60 containers per group? How can I deploy more than 4 containers per group if each container requires at least 1 CPU?

Any information in this regard would be appreciated. Thanks!



@Karishma-Tiwari-MSFT (Member)

Thanks for the feedback! We are currently investigating and will update you shortly.

@Karishma-Tiwari-MSFT (Member)

@goenning I can totally understand your point. We do have a backlog work item open to make the documentation clearer.

Here is the answer from the doc author, which I think will help answer your questions.

You raise a good question, and I see our documentation isn't totally clear on the point. (We have a backlog doc work item to clarify.) The short answer is that the regional limits apply to a container group. ACI allocates container group resources based on the sum of each instance's resource requests. These instances should be able to share the group's CPU and memory resources, unless you specify a resource limit on individual containers. See the ResourceLimits and ResourceRequests descriptions in the REST API.
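To make that allocation model concrete, here is a minimal sketch in plain Go (not an ACI API call; the 4 CPU / 14 GB figure is the East US Linux per-group cap quoted above) of how the group allocation is the sum of per-container requests and is checked against the regional per-group cap:

```go
// Minimal sketch of the ACI allocation rule described above: the container
// group is allocated the sum of each container's resource *requests*, and
// that sum must fit within the regional per-group cap. Per-container
// *limits* only cap an individual container; they do not change the total.
package main

import "fmt"

// containerRequest models one container's resource request.
type containerRequest struct {
	name     string
	cpu      float64 // requested CPU cores
	memoryGB float64 // requested memory in GB
}

// groupTotals sums the per-container requests, which is what ACI
// allocates for the whole container group.
func groupTotals(containers []containerRequest) (cpu, memGB float64) {
	for _, c := range containers {
		cpu += c.cpu
		memGB += c.memoryGB
	}
	return cpu, memGB
}

func main() {
	// Regional per-group cap quoted in the issue (East US, Linux).
	const maxCPU, maxMemGB = 4.0, 14.0

	// Five containers of 1 CPU / 1 GB each, as in the failing deployment.
	group := []containerRequest{
		{"c1", 1, 1}, {"c2", 1, 1}, {"c3", 1, 1}, {"c4", 1, 1}, {"c5", 1, 1},
	}

	cpu, mem := groupTotals(group)
	fmt.Printf("group requests: %.1f CPU, %.1f GB\n", cpu, mem)
	if cpu > maxCPU || mem > maxMemGB {
		fmt.Println("rejected: group total exceeds the regional per-group cap")
	} else {
		fmt.Println("accepted: containers share the group's CPU and memory")
	}
}
```

For the five 1 CPU / 1 GB containers above, the group request comes out to 5 CPU / 5 GB, which is exactly the total the error message reports as unavailable.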

@Karishma-Tiwari-MSFT (Member)

@goenning Thanks for bringing this to our attention. We will now close this issue. If there are further questions regarding this matter, please tag me in a comment. I will reopen it and we will gladly continue the discussion.

takuro-sato added a commit to takuro-sato/CCF that referenced this issue Dec 13, 2022
takuro-sato added a commit to takuro-sato/CCF that referenced this issue Dec 21, 2022
Add initial Dockerfile

Fix comment

Add gRPC server functionality

Copy 'CCF SNP' CI

Fix lint

Trigger CI

Trigger CI

Run CI

Run CI

debug

debug

debug

Print debug

Copy deploy_aci as deploy_attestation_container

Try to build and push attestation-container image

Rename ACR registry

Fix git commit ID in CI

Overwrite docker image tag while debugging

Deploy attestation container to ACI

Revert "Deploy attestation container to ACI"

This reverts commit 450808b.

Use the new Azure app for attestation container CI

Try to deploy again

debug

Revert "debug"

This reverts commit 992c5fe.

debug

Try to fix "The requested resource is not available in the location 'westeurope' at this moment."

Use a different region

Revert some changes

Try to deploy attestation container again

Recreate the image with different name

Use less CPU/memory resources

MicrosoftDocs/azure-docs#24776

Bookmark (it passes the CI)

wip

debug

Fix pip install

debug

Test gRPC server

fix

fix

Remove sleep

Fix resource group

Fix job name

See log of attestation container

Stop delete aci container group for debug

Fix showing log

Fix CI

Remove unnecessary command from CI

Check /dev/sev

Fetch attestation report

Improve function usage

Implementation which doesn't work

Try to fix

tmp

tmp

tmp

tmp

try cgo

Try everything with C

Use fd of go

Create a msg_report_in in Go

Create msg_report_out in Go

Comment out memset

Create payload in Go

Print attestation report in Go

Somehow fetchAttestationReport needs to be called

tmp

Stop calling fetchAttestationReport

tidy

Use Go struct

Remove C code

DeserializeReport

Removed unused package

Tidy up

Tidy

Separate gRPC server and the core functionality

tmp

Fix payload

Fix function call

Use `go test` command for e2e test

Fix path

Fix interface of gRPC

Verbose test output

Remove unused code

Tidy up