The interface used by clients to obtain a new {@link ApplicationId} for submitting new applications.

The ResourceManager responds with a new, monotonically increasing, {@link ApplicationId} which is used by the client to submit a new application.

The ResourceManager also responds with details such as maximum resource capabilities in the cluster as specified in {@link GetNewApplicationResponse}.

@param request request to get a new ApplicationId @return response containing the new ApplicationId to be used to submit an application @throws YarnException @throws IOException @see #submitApplication(SubmitApplicationRequest)]]>
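A minimal client-side sketch of this call, assuming the {@link ApplicationClientProtocol} proxy is created via ClientRMProxy; the wrapper class name is illustrative only:

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
  import org.apache.hadoop.yarn.api.protocolrecords.GetNewApplicationRequest;
  import org.apache.hadoop.yarn.api.protocolrecords.GetNewApplicationResponse;
  import org.apache.hadoop.yarn.api.records.ApplicationId;
  import org.apache.hadoop.yarn.client.ClientRMProxy;
  import org.apache.hadoop.yarn.exceptions.YarnException;

  public class GetNewApplicationExample {
    public static ApplicationId newApplicationId(Configuration conf)
        throws YarnException, IOException {
      // Obtain a proxy to the ResourceManager's client-facing protocol.
      ApplicationClientProtocol rmClient =
          ClientRMProxy.createRMProxy(conf, ApplicationClientProtocol.class);
      // Ask the ResourceManager for a new ApplicationId.
      GetNewApplicationResponse response =
          rmClient.getNewApplication(GetNewApplicationRequest.newInstance());
      // The response also carries the maximum resource capability of the cluster.
      System.out.println("Max capability: " + response.getMaximumResourceCapability());
      return response.getApplicationId();
    }
  }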
The interface used by clients to submit a new application to the ResourceManager.

The client is required to provide details such as queue, {@link Resource} required to run the ApplicationMaster, the equivalent of {@link ContainerLaunchContext} for launching the ApplicationMaster etc. via the {@link SubmitApplicationRequest}.

Currently the ResourceManager sends an immediate (empty) {@link SubmitApplicationResponse} on accepting the submission and throws an exception if it rejects the submission. However, this call needs to be followed by {@link #getApplicationReport(GetApplicationReportRequest)} to make sure that the application gets properly submitted - obtaining a {@link SubmitApplicationResponse} from the ResourceManager doesn't guarantee that the RM 'remembers' this application beyond failover or restart. If RM failover or RM restart happens before the ResourceManager saves the application's state successfully, the subsequent {@link #getApplicationReport(GetApplicationReportRequest)} will throw an {@link ApplicationNotFoundException}. Clients need to re-submit the application with the same {@link ApplicationSubmissionContext} when they encounter the {@link ApplicationNotFoundException} on the {@link #getApplicationReport(GetApplicationReportRequest)} call.

During the submission process, the ResourceManager checks whether the application already exists. If it does, the ResourceManager simply returns the SubmitApplicationResponse.

In secure mode, the ResourceManager verifies access to queues etc. before accepting the application submission.

@param request request to submit a new application @return (empty) response on accepting the submission @throws YarnException @throws IOException @see #getNewApplication(GetNewApplicationRequest)]]>
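A sketch of the submit-then-verify pattern described above, assuming rmClient is an existing {@link ApplicationClientProtocol} proxy and the {@link ApplicationSubmissionContext} has already been populated (queue, AM resource, {@link ContainerLaunchContext} etc.); the wrapper class name is illustrative:

  import java.io.IOException;
  import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
  import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationReportRequest;
  import org.apache.hadoop.yarn.api.protocolrecords.SubmitApplicationRequest;
  import org.apache.hadoop.yarn.api.records.ApplicationReport;
  import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
  import org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException;
  import org.apache.hadoop.yarn.exceptions.YarnException;

  public class SubmitApplicationExample {
    public static ApplicationReport submit(ApplicationClientProtocol rmClient,
        ApplicationSubmissionContext context) throws YarnException, IOException {
      // Submit; the (empty) response only means the RM accepted the request.
      rmClient.submitApplication(SubmitApplicationRequest.newInstance(context));
      GetApplicationReportRequest report =
          GetApplicationReportRequest.newInstance(context.getApplicationId());
      try {
        // Confirm the RM has actually persisted the application.
        return rmClient.getApplicationReport(report).getApplicationReport();
      } catch (ApplicationNotFoundException e) {
        // RM failed over or restarted before saving state: re-submit with the
        // same ApplicationSubmissionContext, as described above.
        rmClient.submitApplication(SubmitApplicationRequest.newInstance(context));
        return rmClient.getApplicationReport(report).getApplicationReport();
      }
    }
  }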
The interface used by clients to request the ResourceManager to fail an application attempt.

The client, via {@link FailApplicationAttemptRequest}, provides the {@link ApplicationAttemptId} of the attempt to be failed.

In secure mode, the ResourceManager verifies access to the application, queue etc. before failing the attempt.

Currently, the ResourceManager returns an empty response on success and throws an exception on rejecting the request.

@param request request to fail an attempt @return ResourceManager returns an empty response on success and throws an exception on rejecting the request @throws YarnException @throws IOException @see #getQueueUserAcls(GetQueueUserAclsInfoRequest)]]>
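A brief sketch, assuming an existing {@link ApplicationClientProtocol} proxy; the wrapper class name is illustrative:

  import java.io.IOException;
  import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
  import org.apache.hadoop.yarn.api.protocolrecords.FailApplicationAttemptRequest;
  import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
  import org.apache.hadoop.yarn.exceptions.YarnException;

  public class FailAttemptExample {
    public static void failAttempt(ApplicationClientProtocol rmClient,
        ApplicationAttemptId attemptId) throws YarnException, IOException {
      // The response itself is empty; an exception is thrown if the RM rejects the request.
      rmClient.failApplicationAttempt(FailApplicationAttemptRequest.newInstance(attemptId));
    }
  }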
The interface used by clients to request the ResourceManager to abort submitted application.

The client, via {@link KillApplicationRequest}, provides the {@link ApplicationId} of the application to be aborted.

In secure mode, the ResourceManager verifies access to the application, queue etc. before terminating the application.

Currently, the ResourceManager returns an empty response on success and throws an exception on rejecting the request.

@param request request to abort a submitted application @return ResourceManager returns an empty response on success and throws an exception on rejecting the request @throws YarnException @throws IOException @see #getQueueUserAcls(GetQueueUserAclsInfoRequest)]]>
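A sketch that also polls the kill-completed flag carried by the KillApplicationResponse (described further below); rmClient is an assumed, pre-built {@link ApplicationClientProtocol} proxy and the wrapper class name is illustrative:

  import java.io.IOException;
  import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
  import org.apache.hadoop.yarn.api.protocolrecords.KillApplicationRequest;
  import org.apache.hadoop.yarn.api.protocolrecords.KillApplicationResponse;
  import org.apache.hadoop.yarn.api.records.ApplicationId;
  import org.apache.hadoop.yarn.exceptions.YarnException;

  public class KillApplicationExample {
    public static void kill(ApplicationClientProtocol rmClient, ApplicationId appId)
        throws YarnException, IOException, InterruptedException {
      KillApplicationResponse response =
          rmClient.forceKillApplication(KillApplicationRequest.newInstance(appId));
      // The kill is asynchronous; re-issue the request until the RM reports completion.
      while (!response.getIsKillCompleted()) {
        Thread.sleep(100L);
        response = rmClient.forceKillApplication(KillApplicationRequest.newInstance(appId));
      }
    }
  }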
The interface used by clients to get metrics about the cluster from the ResourceManager.

The ResourceManager responds with a {@link GetClusterMetricsResponse} which includes the {@link YarnClusterMetrics} with details such as number of current nodes in the cluster.

@param request request for cluster metrics @return cluster metrics @throws YarnException @throws IOException]]>
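A minimal sketch, assuming an existing {@link ApplicationClientProtocol} proxy; the wrapper class name is illustrative:

  import java.io.IOException;
  import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
  import org.apache.hadoop.yarn.api.protocolrecords.GetClusterMetricsRequest;
  import org.apache.hadoop.yarn.api.records.YarnClusterMetrics;
  import org.apache.hadoop.yarn.exceptions.YarnException;

  public class ClusterMetricsExample {
    public static int numNodeManagers(ApplicationClientProtocol rmClient)
        throws YarnException, IOException {
      // The request record is empty; all information comes back in the response.
      YarnClusterMetrics metrics =
          rmClient.getClusterMetrics(GetClusterMetricsRequest.newInstance())
              .getClusterMetrics();
      return metrics.getNumNodeManagers();
    }
  }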
The interface used by clients to get a report of all nodes in the cluster from the ResourceManager.

The ResourceManager responds with a {@link GetClusterNodesResponse} which includes the {@link NodeReport} for all the nodes in the cluster.

@param request request for report on all nodes @return report on all nodes @throws YarnException @throws IOException]]>
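A minimal sketch that restricts the report to RUNNING nodes, assuming an existing {@link ApplicationClientProtocol} proxy; the wrapper class name is illustrative:

  import java.io.IOException;
  import java.util.EnumSet;
  import java.util.List;
  import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
  import org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodesRequest;
  import org.apache.hadoop.yarn.api.records.NodeReport;
  import org.apache.hadoop.yarn.api.records.NodeState;
  import org.apache.hadoop.yarn.exceptions.YarnException;

  public class ClusterNodesExample {
    public static void printRunningNodes(ApplicationClientProtocol rmClient)
        throws YarnException, IOException {
      // Only nodes in the requested states are included in the report.
      List<NodeReport> nodes = rmClient.getClusterNodes(
          GetClusterNodesRequest.newInstance(EnumSet.of(NodeState.RUNNING)))
          .getNodeReports();
      for (NodeReport node : nodes) {
        System.out.println(node.getNodeId() + " capability=" + node.getCapability());
      }
    }
  }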
The interface used by clients to get information about queues from the ResourceManager.

The client, via {@link GetQueueInfoRequest}, can ask for details such as used/total resources, child queues, running applications etc.

In secure mode, the ResourceManager verifies access before providing the information.

@param request request to get queue information @return queue information @throws YarnException @throws IOException]]>
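A minimal sketch, assuming an existing {@link ApplicationClientProtocol} proxy; the wrapper class name is illustrative:

  import java.io.IOException;
  import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
  import org.apache.hadoop.yarn.api.protocolrecords.GetQueueInfoRequest;
  import org.apache.hadoop.yarn.api.records.QueueInfo;
  import org.apache.hadoop.yarn.exceptions.YarnException;

  public class QueueInfoExample {
    public static QueueInfo describeQueue(ApplicationClientProtocol rmClient, String queue)
        throws YarnException, IOException {
      // Ask for running applications, child queues, and the full recursive hierarchy.
      return rmClient.getQueueInfo(
          GetQueueInfoRequest.newInstance(queue, true, true, true)).getQueueInfo();
    }
  }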
The interface used by clients to get information about queue acls for current user from the ResourceManager.

The ResourceManager responds with queue acls for all existing queues.

@param request request to get queue acls for current user @return queue acls for current user @throws YarnException @throws IOException]]>
The interface used by clients to obtain a new {@link ReservationId} for submitting new reservations.

The ResourceManager responds with a new, unique, {@link ReservationId} which is used by the client to submit a new reservation.

@param request to get a new ReservationId @return response containing the new ReservationId to be used to submit a new reservation @throws YarnException if the reservation system is not enabled. @throws IOException on IO failures. @see #submitReservation(ReservationSubmissionRequest)]]>
The interface used by clients to submit a new reservation to the {@code ResourceManager}.

The client packages all details of its request in a {@link ReservationSubmissionRequest} object. This contains information about the amount of capacity, temporal constraints, and concurrency needs. Furthermore, the reservation might be composed of multiple stages, with ordering dependencies among them.

In order to respond, a new admission control component in the {@code ResourceManager} performs an analysis of the resources that have been committed over the period of time the user is requesting, verifies that the user's request can be fulfilled, and checks that it respects a sharing policy (e.g., {@code CapacityOverTimePolicy}). Once it has positively determined that the ReservationSubmissionRequest is satisfiable, the {@code ResourceManager} answers with a {@link ReservationSubmissionResponse} that includes a non-null {@link ReservationId}. Upon failure to find a valid allocation the response is an exception with the reason. On application submission the client can use this {@link ReservationId} to obtain access to the reserved resources.

The system guarantees that during the time-range specified by the user, the reservationID will correspond to a valid reservation. The amount of capacity dedicated to such a queue can vary over time, depending on the allocation that has been determined, but it is guaranteed to satisfy all the constraints expressed by the user in the {@link ReservationSubmissionRequest}.

@param request the request to submit a new Reservation @return response the {@link ReservationId} on accepting the submission @throws YarnException if the request is invalid or reservation cannot be created successfully @throws IOException]]>
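A sketch of building and submitting a single-stage reservation, assuming an existing {@link ApplicationClientProtocol} proxy and a {@link ReservationId} previously obtained via getNewReservation; the record factory signatures shown correspond to recent Hadoop releases and the wrapper class, queue name, and sizing numbers are illustrative:

  import java.io.IOException;
  import java.util.Collections;
  import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
  import org.apache.hadoop.yarn.api.protocolrecords.ReservationSubmissionRequest;
  import org.apache.hadoop.yarn.api.records.ReservationDefinition;
  import org.apache.hadoop.yarn.api.records.ReservationId;
  import org.apache.hadoop.yarn.api.records.ReservationRequest;
  import org.apache.hadoop.yarn.api.records.ReservationRequestInterpreter;
  import org.apache.hadoop.yarn.api.records.ReservationRequests;
  import org.apache.hadoop.yarn.api.records.Resource;
  import org.apache.hadoop.yarn.exceptions.YarnException;

  public class SubmitReservationExample {
    public static ReservationId reserve(ApplicationClientProtocol rmClient,
        ReservationId reservationId, String reservableQueue,
        long startMs, long deadlineMs) throws YarnException, IOException {
      // One stage: 10 containers of <1 GB, 1 vcore> for 5 minutes within [startMs, deadlineMs].
      ReservationRequest stage = ReservationRequest.newInstance(
          Resource.newInstance(1024, 1), 10, 1, 5 * 60 * 1000L);
      ReservationRequests stages = ReservationRequests.newInstance(
          Collections.singletonList(stage), ReservationRequestInterpreter.R_ALL);
      ReservationDefinition definition = ReservationDefinition.newInstance(
          startMs, deadlineMs, stages, "example-reservation");
      rmClient.submitReservation(ReservationSubmissionRequest.newInstance(
          definition, reservableQueue, reservationId));
      // On success the reservation id is now valid and can be attached to application submissions.
      return reservationId;
    }
  }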
The interface used by clients to update an existing Reservation. This is referred to as a re-negotiation process, in which a user that has previously submitted a Reservation requests a modification of it.

The allocation is attempted by virtually substituting all previous allocations related to this Reservation with new ones, that satisfy the new {@link ReservationUpdateRequest}. Upon success the previous allocation is substituted by the new one, and on failure (i.e., if the system cannot find a valid allocation for the updated request), the previous allocation remains valid. The {@link ReservationId} is not changed, and applications currently running within this reservation will automatically receive the resources based on the new allocation.

@param request to update an existing Reservation (the ReservationRequest should refer to an existing valid {@link ReservationId}) @return response empty on successfully updating the existing reservation @throws YarnException if the request is invalid or reservation cannot be updated successfully @throws IOException]]>
The interface used by clients to remove an existing Reservation. Upon deletion of a reservation, applications running with this reservation are automatically downgraded to normal jobs running without any dedicated reservation.

@param request to remove an existing Reservation (the ReservationRequest should refer to an existing valid {@link ReservationId}) @return response empty on successfully deleting the existing reservation @throws YarnException if the request is invalid or reservation cannot be deleted successfully @throws IOException]]>
The interface used by clients to get the list of reservations in a plan. The reservationId will be used to search for reservations to list if it is provided. Otherwise, it will select active reservations within the startTime and endTime (inclusive).

@param request to list reservations in a plan. Contains fields to select String queue, ReservationId reservationId, long startTime, long endTime, and a bool includeReservationAllocations. queue: Required. Cannot be null or empty. Refers to the reservable queue in the scheduler that was selected when creating a reservation submission {@link ReservationSubmissionRequest}. reservationId: Optional. If provided, other fields will be ignored. startTime: Optional. If provided, only reservations that end after the startTime will be selected. This defaults to 0 if an invalid number is used. endTime: Optional. If provided, only reservations that start on or before endTime will be selected. This defaults to Long.MAX_VALUE if an invalid number is used. includeReservationAllocations: Optional. Flag that determines whether the entire reservation allocations are to be returned. Reservation allocations are subject to change in the event of re-planning as described by {@code ReservationDefinition}. @return response that contains information about reservations that are being searched for. @throws YarnException if the request is invalid @throws IOException on IO failures]]>
The interface used by clients to get node-to-labels mappings in the existing cluster.

@param request @return node to labels mappings @throws YarnException @throws IOException]]>
The interface used by clients to get labels-to-nodes mappings in the existing cluster.

@param request @return labels to nodes mappings @throws YarnException @throws IOException]]>
The interface used by clients to get node labels in the cluster.

@param request to get node labels collection of this cluster @return node labels collection of this cluster @throws YarnException @throws IOException]]>
The interface used by clients to set the priority of an application.

@param request to set priority of an application @return an empty response @throws YarnException @throws IOException]]>
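A minimal sketch, assuming an existing {@link ApplicationClientProtocol} proxy; the wrapper class name is illustrative:

  import java.io.IOException;
  import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
  import org.apache.hadoop.yarn.api.protocolrecords.UpdateApplicationPriorityRequest;
  import org.apache.hadoop.yarn.api.records.ApplicationId;
  import org.apache.hadoop.yarn.api.records.Priority;
  import org.apache.hadoop.yarn.exceptions.YarnException;

  public class SetPriorityExample {
    public static Priority setPriority(ApplicationClientProtocol rmClient,
        ApplicationId appId, int priority) throws YarnException, IOException {
      // The response carries the priority the ResourceManager actually applied.
      return rmClient.updateApplicationPriority(
          UpdateApplicationPriorityRequest.newInstance(appId, Priority.newInstance(priority)))
          .getApplicationPriority();
    }
  }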
The interface used by clients to request the ResourceManager to signal a container. For example, the client can send command OUTPUT_THREAD_DUMP to dump threads of the container.

The client, via {@link SignalContainerRequest} provides the id of the container and the signal command.

In secure mode, the ResourceManager verifies access to the application before signaling the container. The user needs to have MODIFY_APP permission.

Currently, the ResourceManager returns an empty response on success and throws an exception on rejecting the request.

@param request request to signal a container @return ResourceManager returns an empty response on success and throws an exception on rejecting the request @throws YarnException @throws IOException]]>
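A minimal sketch of the thread-dump example mentioned above, assuming an existing {@link ApplicationClientProtocol} proxy; the wrapper class name is illustrative:

  import java.io.IOException;
  import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
  import org.apache.hadoop.yarn.api.protocolrecords.SignalContainerRequest;
  import org.apache.hadoop.yarn.api.records.ContainerId;
  import org.apache.hadoop.yarn.api.records.SignalContainerCommand;
  import org.apache.hadoop.yarn.exceptions.YarnException;

  public class SignalContainerExample {
    public static void dumpThreads(ApplicationClientProtocol rmClient, ContainerId containerId)
        throws YarnException, IOException {
      // Ask the container to dump its threads; requires MODIFY_APP permission in secure mode.
      rmClient.signalToContainer(SignalContainerRequest.newInstance(
          containerId, SignalContainerCommand.OUTPUT_THREAD_DUMP));
    }
  }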
The interface used by clients to set ApplicationTimeouts of an application. The UpdateApplicationTimeoutsRequest should carry the timeout value as an absolute time in ISO 8601 format yyyy-MM-dd'T'HH:mm:ss.SSSZ.

Note: If the application timeout value is less than or equal to the current time, the update throws a YarnException. @param request to set ApplicationTimeouts of an application @return a response with updated timeouts. @throws YarnException if the update request has empty values or the application is in a completing state. @throws IOException on IO failures]]>
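A sketch of extending an application's LIFETIME timeout to an absolute expiry instant, assuming an existing {@link ApplicationClientProtocol} proxy; the wrapper class name is illustrative:

  import java.io.IOException;
  import java.text.SimpleDateFormat;
  import java.util.Collections;
  import java.util.Date;
  import java.util.TimeZone;
  import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
  import org.apache.hadoop.yarn.api.protocolrecords.UpdateApplicationTimeoutsRequest;
  import org.apache.hadoop.yarn.api.records.ApplicationId;
  import org.apache.hadoop.yarn.api.records.ApplicationTimeoutType;
  import org.apache.hadoop.yarn.exceptions.YarnException;

  public class UpdateTimeoutExample {
    public static void extendLifetime(ApplicationClientProtocol rmClient,
        ApplicationId appId, long expiryMillis) throws YarnException, IOException {
      // Format the absolute expiry time in the required ISO 8601 form.
      SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
      fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
      String expiry = fmt.format(new Date(expiryMillis));
      rmClient.updateApplicationTimeouts(UpdateApplicationTimeoutsRequest.newInstance(
          appId, Collections.singletonMap(ApplicationTimeoutType.LIFETIME, expiry)));
    }
  }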
The interface to get the details for a specific resource profile.

@param request request to get the details of a resource profile @return Response containing the details for a particular resource profile @throws YarnException if any error happens inside YARN @throws IOException in case of other errors]]>
The protocol between clients and the ResourceManager to submit/abort jobs and to get information on applications, cluster metrics, nodes, queues and ACLs.

]]>
The protocol between clients and the ApplicationHistoryServer to get the information of completed applications etc.

]]>
The interface used by a new ApplicationMaster to register with the ResourceManager.

The ApplicationMaster needs to provide details such as RPC Port, HTTP tracking url etc. as specified in {@link RegisterApplicationMasterRequest}.

The ResourceManager responds with critical details such as maximum resource capabilities in the cluster as specified in {@link RegisterApplicationMasterResponse}.

Re-register is only allowed for Unmanaged Application Master (UAM) HA, with {@link org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext#getKeepContainersAcrossApplicationAttempts()} set to true.

@param request registration request @return registration response @throws YarnException @throws IOException @throws InvalidApplicationMasterRequestException The exception is thrown when an ApplicationMaster tries to register more than once. @see RegisterApplicationMasterRequest @see RegisterApplicationMasterResponse]]>
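A sketch of an ApplicationMaster registering, assuming the process is already running with the AMRMToken in its UGI credentials (as launched by the ResourceManager); the wrapper class name is illustrative:

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
  import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest;
  import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse;
  import org.apache.hadoop.yarn.client.ClientRMProxy;
  import org.apache.hadoop.yarn.exceptions.YarnException;

  public class RegisterAMExample {
    public static RegisterApplicationMasterResponse register(Configuration conf,
        String host, int rpcPort, String trackingUrl) throws YarnException, IOException {
      // Create the scheduler-facing proxy; authentication typically uses the AMRMToken.
      ApplicationMasterProtocol scheduler =
          ClientRMProxy.createRMProxy(conf, ApplicationMasterProtocol.class);
      RegisterApplicationMasterResponse response = scheduler.registerApplicationMaster(
          RegisterApplicationMasterRequest.newInstance(host, rpcPort, trackingUrl));
      // The response carries the maximum resource capability, ACLs, and (if keep-containers
      // is enabled) containers surviving from previous attempts.
      System.out.println("Max capability: " + response.getMaximumResourceCapability());
      return response;
    }
  }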
The interface used by an ApplicationMaster to notify the ResourceManager about its completion (success or failure).

The ApplicationMaster has to provide details such as final state, diagnostics (in case of failures) etc. as specified in {@link FinishApplicationMasterRequest}.

The ResourceManager responds with {@link FinishApplicationMasterResponse}.

@param request completion request @return completion response @throws YarnException @throws IOException @see FinishApplicationMasterRequest @see FinishApplicationMasterResponse]]>
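A sketch of unregistering cleanly, assuming scheduler is an existing {@link ApplicationMasterProtocol} proxy; the wrapper class name and historyUrl parameter are illustrative:

  import java.io.IOException;
  import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
  import org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterRequest;
  import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
  import org.apache.hadoop.yarn.exceptions.YarnException;

  public class FinishAMExample {
    public static void unregister(ApplicationMasterProtocol scheduler, String historyUrl)
        throws YarnException, IOException, InterruptedException {
      FinishApplicationMasterRequest request = FinishApplicationMasterRequest.newInstance(
          FinalApplicationStatus.SUCCEEDED, "" /* diagnostics */, historyUrl);
      // Keep retrying until the RM confirms it has unregistered the application;
      // only then is it safe for the AM process to exit.
      while (!scheduler.finishApplicationMaster(request).getIsUnregistered()) {
        Thread.sleep(100L);
      }
    }
  }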
The main interface between an ApplicationMaster and the ResourceManager.

The ApplicationMaster uses this interface to provide a list of {@link ResourceRequest} and to return unused {@link Container}s allocated to it via {@link AllocateRequest}. Optionally, the ApplicationMaster can also blacklist resources which it doesn't want to use.

This also doubles up as a heartbeat to let the ResourceManager know that the ApplicationMaster is alive. Thus, applications should periodically make this call to be kept alive. The frequency depends on {@link YarnConfiguration#RM_AM_EXPIRY_INTERVAL_MS} which defaults to {@link YarnConfiguration#DEFAULT_RM_AM_EXPIRY_INTERVAL_MS}.

The ResourceManager responds with a list of allocated {@link Container}s, statuses of completed containers and headroom information for the application.

The ApplicationMaster can use the available headroom (resources) to decide how to utilize allocated resources and make informed decisions about future resource requests.

@param request allocation request @return allocation response @throws YarnException @throws IOException @throws InvalidApplicationMasterRequestException This exception is thrown when an ApplicationMaster calls allocate without registering first. @throws InvalidResourceBlacklistRequestException This exception is thrown when an application provides an invalid specification for blacklist of resources. @throws InvalidResourceRequestException This exception is thrown when a {@link ResourceRequest} is out of the range of the configured lower and upper limits on the resources. @see AllocateRequest @see AllocateResponse]]>
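A sketch of one allocate heartbeat that asks for two containers anywhere in the cluster, assuming scheduler is an existing {@link ApplicationMasterProtocol} proxy that has already registered; remember this call should be made periodically even when nothing new is being asked for. The wrapper class name and sizing numbers are illustrative:

  import java.io.IOException;
  import java.util.Collections;
  import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
  import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
  import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
  import org.apache.hadoop.yarn.api.records.Container;
  import org.apache.hadoop.yarn.api.records.ContainerId;
  import org.apache.hadoop.yarn.api.records.Priority;
  import org.apache.hadoop.yarn.api.records.Resource;
  import org.apache.hadoop.yarn.api.records.ResourceRequest;
  import org.apache.hadoop.yarn.exceptions.YarnException;

  public class AllocateExample {
    public static AllocateResponse requestContainers(ApplicationMasterProtocol scheduler,
        int lastResponseId, float progress) throws YarnException, IOException {
      // Ask for 2 containers of <2 GB, 1 vcore> on any node (ResourceRequest.ANY).
      ResourceRequest ask = ResourceRequest.newInstance(
          Priority.newInstance(0), ResourceRequest.ANY, Resource.newInstance(2048, 1), 2);
      AllocateRequest request = AllocateRequest.newInstance(
          lastResponseId, progress, Collections.singletonList(ask),
          Collections.<ContainerId>emptyList() /* no containers released */,
          null /* no blacklist */);
      AllocateResponse response = scheduler.allocate(request);
      for (Container allocated : response.getAllocatedContainers()) {
        System.out.println("Allocated " + allocated.getId() + " on " + allocated.getNodeId());
      }
      return response;
    }
  }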
The protocol between a live instance of ApplicationMaster and the ResourceManager.

This is used by the ApplicationMaster to register/unregister and to request and obtain resources in the cluster from the ResourceManager.

]]>
The interface used by clients to claim a resource with the SharedCacheManager. The client uses a checksum to identify the resource and an {@link ApplicationId} to identify which application will be using the resource.

The SharedCacheManager responds with whether or not the resource exists in the cache. If the resource exists, a Path to the resource in the shared cache is returned. If the resource does not exist, the response is empty.

@param request request to claim a resource in the shared cache @return response indicating if the resource is already in the cache @throws YarnException @throws IOException]]>
The interface used by clients to release a resource with the SharedCacheManager. This method is called once an application is no longer using a claimed resource in the shared cache. The client uses a checksum to identify the resource and an {@link ApplicationId} to identify which application is releasing the resource.

Note: This method is an optimization and the client is not required to call it for correctness.

Currently the SharedCacheManager sends an empty response.

@param request request to release a resource in the shared cache @return (empty) response on releasing the resource @throws YarnException @throws IOException]]>
The protocol between clients and the SharedCacheManager to claim and release resources in the shared cache.

]]>
The ApplicationMaster provides a list of {@link StartContainerRequest}s to a NodeManager to start {@link Container}s allocated to it using this interface.

The ApplicationMaster has to provide details such as allocated resource capability, security tokens (if enabled), command to be executed to start the container, environment for the process, necessary binaries/jar/shared-objects etc. via the {@link ContainerLaunchContext} in the {@link StartContainerRequest}.

The NodeManager sends a response via {@link StartContainersResponse} which includes a list of successfully launched {@link Container}s, a containerId-to-exception map for each failed {@link StartContainerRequest} in which the exception indicates per-container errors, and an allServicesMetaData map between the names of auxiliary services and their corresponding meta-data. Note: Non-container-specific exceptions will still be thrown by the API method itself.

The ApplicationMaster can use {@link #getContainerStatuses(GetContainerStatusesRequest)} to get updated statuses of the to-be-launched or launched containers.

@param request request to start a list of containers @return response including containerIds of all successfully launched containers, a containerId-to-exception map for failed requests and an allServicesMetaData map. @throws YarnException @throws IOException]]>
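A sketch of launching one allocated container, assuming nm is an existing {@link ContainerManagementProtocol} proxy already bound to the container's NodeManager with the appropriate NMToken, and that the {@link ContainerLaunchContext} has been prepared separately; the wrapper class name is illustrative:

  import java.io.IOException;
  import java.util.Collections;
  import java.util.List;
  import java.util.Map;
  import org.apache.hadoop.yarn.api.ContainerManagementProtocol;
  import org.apache.hadoop.yarn.api.protocolrecords.StartContainerRequest;
  import org.apache.hadoop.yarn.api.protocolrecords.StartContainersRequest;
  import org.apache.hadoop.yarn.api.protocolrecords.StartContainersResponse;
  import org.apache.hadoop.yarn.api.records.Container;
  import org.apache.hadoop.yarn.api.records.ContainerId;
  import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
  import org.apache.hadoop.yarn.api.records.SerializedException;
  import org.apache.hadoop.yarn.exceptions.YarnException;

  public class StartContainersExample {
    public static List<ContainerId> launch(ContainerManagementProtocol nm,
        Container allocated, ContainerLaunchContext launchContext)
        throws YarnException, IOException {
      // Each StartContainerRequest pairs a launch context with the container
      // token issued by the ResourceManager for that container.
      StartContainerRequest start =
          StartContainerRequest.newInstance(launchContext, allocated.getContainerToken());
      StartContainersResponse response = nm.startContainers(
          StartContainersRequest.newInstance(Collections.singletonList(start)));
      // Per-container failures come back in a containerId-to-exception map.
      for (Map.Entry<ContainerId, SerializedException> failed :
          response.getFailedRequests().entrySet()) {
        System.err.println("Failed to start " + failed.getKey() + ": " + failed.getValue());
      }
      return response.getSuccessfullyStartedContainers();
    }
  }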
The ApplicationMaster requests a NodeManager to stop a list of {@link Container}s allocated to it using this interface.

The ApplicationMaster sends a {@link StopContainersRequest} which includes the {@link ContainerId}s of the containers to be stopped.

The NodeManager sends a response via {@link StopContainersResponse} which includes a list of {@link ContainerId}s of successfully stopped containers and a containerId-to-exception map for each failed request in which the exception indicates per-container errors. Note: Non-container-specific exceptions will still be thrown by the API method itself. The ApplicationMaster can use {@link #getContainerStatuses(GetContainerStatusesRequest)} to get updated statuses of the containers.

@param request request to stop a list of containers @return response which includes a list of containerIds of successfully stopped containers, a containerId-to-exception map for failed requests. @throws YarnException @throws IOException]]>
The API used by the ApplicationMaster to request the current statuses of containers from the NodeManager.

The ApplicationMaster sends a {@link GetContainerStatusesRequest} which includes the {@link ContainerId}s of all containers whose statuses are needed.

The NodeManager responds with {@link GetContainerStatusesResponse} which includes a list of {@link ContainerStatus} of the successfully queried containers and a containerId-to-exception map for each failed request in which the exception indicates per-container errors. Note: Non-container-specific exceptions will still be thrown by the API method itself.

@param request request to get ContainerStatuses of containers with the specified ContainerIds @return response containing the list of ContainerStatus of the successfully queried containers and a containerId-to-exception map for failed requests. @throws YarnException @throws IOException]]>
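A minimal sketch, assuming nm is an existing {@link ContainerManagementProtocol} proxy for the NodeManager that hosts the listed containers; the wrapper class name is illustrative:

  import java.io.IOException;
  import java.util.List;
  import org.apache.hadoop.yarn.api.ContainerManagementProtocol;
  import org.apache.hadoop.yarn.api.protocolrecords.GetContainerStatusesRequest;
  import org.apache.hadoop.yarn.api.records.ContainerId;
  import org.apache.hadoop.yarn.api.records.ContainerStatus;
  import org.apache.hadoop.yarn.exceptions.YarnException;

  public class ContainerStatusesExample {
    public static void printStatuses(ContainerManagementProtocol nm, List<ContainerId> ids)
        throws YarnException, IOException {
      List<ContainerStatus> statuses =
          nm.getContainerStatuses(GetContainerStatusesRequest.newInstance(ids))
              .getContainerStatuses();
      for (ContainerStatus status : statuses) {
        System.out.println(status.getContainerId() + " -> " + status.getState());
      }
    }
  }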
The API used by the ApplicationMaster to request for resource increase of running containers on the NodeManager.

@param request request to increase resource of a list of containers @return response which includes a list of containerIds of containers whose resource has been successfully increased and a containerId-to-exception map for failed requests. @throws YarnException @throws IOException]]>
The API used by the ApplicationMaster to request for resource update of running containers on the NodeManager.

@param request request to update resource of a list of containers @return response which includes a list of containerIds of containers whose resource has been successfully updated and a containerId-to-exception map for failed requests. @throws YarnException Exception specific to YARN @throws IOException IOException thrown from NodeManager]]>
The protocol between an ApplicationMaster and a NodeManager to start/stop and increase resource of containers and to get status of running containers.

If security is enabled the NodeManager verifies that the ApplicationMaster has truly been allocated the container by the ResourceManager and also verifies all interactions such as stopping the container or obtaining status information for the container.

]]>
response id used to track duplicate responses. @return response id]]> response id used to track duplicate responses. @param id response id]]> current progress of application. @return current progress of application]]> current progress of application @param progress current progress of application]]> ResourceRequest to update the ResourceManager about the application's resource requirements. @return the list of ResourceRequest @see ResourceRequest]]> ResourceRequest to update the ResourceManager about the application's resource requirements. @param resourceRequests list of ResourceRequest to update the ResourceManager about the application's resource requirements @see ResourceRequest]]> ContainerId of containers being released by the ApplicationMaster. @return list of ContainerId of containers being released by the ApplicationMaster]]> ContainerId of containers being released by the ApplicationMaster @param releaseContainers list of ContainerId of containers being released by the ApplicationMaster]]> ResourceBlacklistRequest being sent by the ApplicationMaster. @return the ResourceBlacklistRequest being sent by the ApplicationMaster @see ResourceBlacklistRequest]]> ResourceBlacklistRequest to inform the ResourceManager about the blacklist additions and removals per the ApplicationMaster. @param resourceBlacklistRequest the ResourceBlacklistRequest to inform the ResourceManager about the blacklist additions and removals per the ApplicationMaster @see ResourceBlacklistRequest]]> ApplicationMaster. @return list of {@link UpdateContainerRequest} being sent by the ApplicationMaster.]]> ResourceManager about the containers that need to be updated. @param updateRequests list of UpdateContainerRequest for containers to be updated]]> The core request sent by the ApplicationMaster to the ResourceManager to obtain resources in the cluster.

The request includes:

  • A response id to track duplicate responses.
  • Progress information.
  • A list of {@link ResourceRequest} to inform the ResourceManager about the application's resource requirements.
  • A list of unused {@link Container} which are being returned.
  • A list of {@link UpdateContainerRequest} to inform the ResourceManager about the change in requirements of running containers.
@see ApplicationMasterProtocol#allocate(AllocateRequest)]]>
responseId of the request. @see AllocateRequest#setResponseId(int) @param responseId responseId of the request @return {@link AllocateRequestBuilder}]]> progress of the request. @see AllocateRequest#setProgress(float) @param progress progress of the request @return {@link AllocateRequestBuilder}]]> askList of the request. @see AllocateRequest#setAskList(List) @param askList askList of the request @return {@link AllocateRequestBuilder}]]> releaseList of the request. @see AllocateRequest#setReleaseList(List) @param releaseList releaseList of the request @return {@link AllocateRequestBuilder}]]> resourceBlacklistRequest of the request. @see AllocateRequest#setResourceBlacklistRequest( ResourceBlacklistRequest) @param resourceBlacklistRequest resourceBlacklistRequest of the request @return {@link AllocateRequestBuilder}]]> updateRequests of the request. @see AllocateRequest#setUpdateRequests(List) @param updateRequests updateRequests of the request @return {@link AllocateRequestBuilder}]]> trackingUrl of the request. @see AllocateRequest#setTrackingUrl(String) @param trackingUrl new tracking url @return {@link AllocateRequestBuilder}]]> ResourceManager needs the ApplicationMaster to take some action then it will send an AMCommand to the ApplicationMaster. See AMCommand for details on commands and actions for them. @return AMCommand if the ApplicationMaster should take action, null otherwise @see AMCommand]]> last response id. @return last response id]]> newly allocated Container by the ResourceManager. @return list of newly allocated Container]]> available headroom for resources in the cluster for the application. @return limit of available headroom for resources in the cluster for the application]]> completed containers' statuses. @return the list of completed containers' statuses]]> updated NodeReports. Updates could be changes in health, availability etc of the nodes. @return The delta of updated nodes since the last response]]> The message is a snapshot of the resources the RM wants back from the AM. While demand persists, the RM will repeat its request; applications should not interpret each message as a request for additional resources on top of previous messages. Resources requested consistently over some duration may be forcibly killed by the RM. @return A specification of the resources to reclaim from this AM.]]> 1) AM is receiving first container on underlying NodeManager.
OR
2) NMToken master key rolled over in ResourceManager and AM is getting new container on the same underlying NodeManager.

AM will receive one NMToken per NM irrespective of the number of containers issued on the same NM. The AM is expected to store these tokens until issued a new token for the same NM. @return list of NMTokens required for communicating with NM]]> ResourceManager. @return list of newly increased containers]]> UpdateContainerError for container update requests that were in error]]> ResourceManager to the ApplicationMaster during resource negotiation.

The response includes:

  • Response ID to track duplicate responses.
  • An AMCommand sent by ResourceManager to let the {@code ApplicationMaster} take some actions (resync, shutdown etc.).
  • A list of newly allocated {@link Container}.
  • A list of completed {@link Container}s' statuses.
  • The available headroom for resources in the cluster for the application.
  • A list of nodes whose status has been updated.
  • The number of available nodes in a cluster.
  • A description of resources requested back by the cluster
  • AMRMToken, if AMRMToken has been rolled over
  • A list of {@link Container} representing the containers whose resource has been increased.
  • A list of {@link Container} representing the containers whose resource has been decreased.
@see ApplicationMasterProtocol#allocate(AllocateRequest)]]>
Note: {@link NMToken} will be used for authenticating communication with {@code NodeManager}. @return the list of container tokens to be used for authorization during container resource update. @see NMToken]]> AllocateResponse.getUpdatedContainers. The token contains the container id and resource capability required for container resource update. @param containersToUpdate the list of container tokens to be used for container resource increase.]]> The request sent by Application Master to the Node Manager to change the resource quota of a container.

@see ContainerManagementProtocol#updateContainer(ContainerUpdateRequest)]]>
The response sent by the NodeManager to the ApplicationMaster when asked to update container resource.

@see ContainerManagementProtocol#updateContainer(ContainerUpdateRequest)]]>
ApplicationAttemptId of the attempt to be failed. @return ApplicationAttemptId of the attempt.]]> The request sent by the client to the ResourceManager to fail an application attempt.

The request includes the {@link ApplicationAttemptId} of the attempt to be failed.

@see ApplicationClientProtocol#failApplicationAttempt(FailApplicationAttemptRequest)]]>
The response sent by the ResourceManager to the client failing an application attempt.

Currently it's empty.

@see ApplicationClientProtocol#failApplicationAttempt(FailApplicationAttemptRequest)]]>
final state of the ApplicationMaster. @return final state of the ApplicationMaster]]> final state of the ApplicationMaster @param finalState final state of the ApplicationMaster]]> diagnostic information on application failure. @return diagnostic information on application failure]]> diagnostic information on application failure. @param diagnostics diagnostic information on application failure]]> tracking URL for the ApplicationMaster. If this URL contains a scheme, it will be used by the ResourceManager web application proxy; otherwise it will default to http. @return tracking URL for the ApplicationMaster]]> final tracking URL for the ApplicationMaster. This is the web-URL to which the ResourceManager or web-application proxy will redirect clients/users once the application is finished and the ApplicationMaster is gone.

If the passed url has a scheme then that will be used by the ResourceManager and web-application proxy, otherwise the scheme will default to http.

Empty, null, "N/A" strings are all valid besides a real URL. In case an url isn't explicitly passed, it defaults to "N/A" on the ResourceManager.

@param url tracking URL for the ApplicationMaster]]> The final request includes details such as:

  • Final state of the {@code ApplicationMaster}
  • Diagnostic information in case of failure of the {@code ApplicationMaster}
  • Tracking URL
@see ApplicationMasterProtocol#finishApplicationMaster(FinishApplicationMasterRequest)]]>
ResourceManager to an ApplicationMaster on its completion.

The response includes:

  • A flag which indicates that the application has successfully unregistered with the RM and the application can safely stop.

Note: The flag indicates whether the application has successfully unregistered and is safe to stop. The application may stop after the flag is true. If the application stops before the flag is true then the RM may retry the application. @see ApplicationMasterProtocol#finishApplicationMaster(FinishApplicationMasterRequest)]]> ApplicationAttemptId of an application attempt. @return ApplicationAttemptId of an application attempt]]> ApplicationAttemptId of an application attempt @param applicationAttemptId ApplicationAttemptId of an application attempt]]> The request sent by a client to the ResourceManager to get an {@link ApplicationAttemptReport} for an application attempt.

The request should include the {@link ApplicationAttemptId} of the application attempt.

@see ApplicationAttemptReport @see ApplicationHistoryProtocol#getApplicationAttemptReport(GetApplicationAttemptReportRequest)]]>
ApplicationAttemptReport for the application attempt. @return ApplicationAttemptReport for the application attempt]]> ApplicationAttemptReport for the application attempt. @param applicationAttemptReport ApplicationAttemptReport for the application attempt]]> The response sent by the ResourceManager to a client requesting an application attempt report.

The response includes an {@link ApplicationAttemptReport} which has the details about the particular application attempt

@see ApplicationAttemptReport @see ApplicationHistoryProtocol#getApplicationAttemptReport(GetApplicationAttemptReportRequest)]]>
ApplicationId of an application @return ApplicationId of an application]]> ApplicationId of an application @param applicationId ApplicationId of an application]]> The request from clients to get a list of application attempt reports of an application from the ResourceManager.

@see ApplicationHistoryProtocol#getApplicationAttempts(GetApplicationAttemptsRequest)]]>
ApplicationReport of an application. @return a list of ApplicationReport of an application]]> ApplicationReport of an application. @param applicationAttempts a list of ApplicationReport of an application]]> The response sent by the ResourceManager to a client requesting a list of {@link ApplicationAttemptReport} for application attempts.

The ApplicationAttemptReport for each application includes the details of an application attempt.

@see ApplicationAttemptReport @see ApplicationHistoryProtocol#getApplicationAttempts(GetApplicationAttemptsRequest)]]>
ApplicationId of the application. @return ApplicationId of the application]]> ApplicationId of the application @param applicationId ApplicationId of the application]]> The request sent by a client to the ResourceManager to get an {@link ApplicationReport} for an application.

The request should include the {@link ApplicationId} of the application.

@see ApplicationClientProtocol#getApplicationReport(GetApplicationReportRequest) @see ApplicationReport]]>
ApplicationReport for the application. @return ApplicationReport for the application]]> The response sent by the ResourceManager to a client requesting an application report.

The response includes an {@link ApplicationReport} which has details such as user, queue, name, host on which the ApplicationMaster is running, RPC port, tracking URL, diagnostics, start time etc.

@see ApplicationClientProtocol#getApplicationReport(GetApplicationReportRequest)]]>
The request from clients to get a report of Applications matching the given application types in the cluster from the ResourceManager.

@see ApplicationClientProtocol#getApplications(GetApplicationsRequest)

Setting any of the parameters to null would just disable that filter.

@param scope {@link ApplicationsRequestScope} to filter by @param users list of users to filter by @param queues list of scheduler queues to filter by @param applicationTypes types of applications @param applicationTags application tags to filter by @param applicationStates application states to filter by @param startRange range of application start times to filter by @param finishRange range of application finish times to filter by @param limit number of applications to limit to @return {@link GetApplicationsRequest} to be used with {@link ApplicationClientProtocol#getApplications(GetApplicationsRequest)}]]>
The request from clients to get a report of Applications matching the given application types in the cluster from the ResourceManager.

@param scope {@link ApplicationsRequestScope} to filter by @see ApplicationClientProtocol#getApplications(GetApplicationsRequest) @return a report of Applications in {@link GetApplicationsRequest}]]>
The request from clients to get a report of Applications matching the given application types in the cluster from the ResourceManager.

@see ApplicationClientProtocol#getApplications(GetApplicationsRequest) @return a report of Applications in {@link GetApplicationsRequest}]]>
The request from clients to get a report of Applications matching the given application states in the cluster from the ResourceManager.

@see ApplicationClientProtocol#getApplications(GetApplicationsRequest) @return a report of Applications in {@link GetApplicationsRequest}]]>
The request from clients to get a report of Applications matching the given application types and application states in the cluster from the ResourceManager.

@see ApplicationClientProtocol#getApplications(GetApplicationsRequest) @return a report of Applications in GetApplicationsRequest]]>
The request from clients to get a report of Applications in the cluster from the ResourceManager.

@see ApplicationClientProtocol#getApplications(GetApplicationsRequest)]]>
ApplicationReport for applications. @return ApplicationReport for applications]]> The response sent by the ResourceManager to a client requesting an {@link ApplicationReport} for applications.

The ApplicationReport for each application includes details such as user, queue, name, host on which the ApplicationMaster is running, RPC port, tracking URL, diagnostics, start time etc.

@see ApplicationReport @see ApplicationClientProtocol#getApplications(GetApplicationsRequest)]]>
The request sent by clients to get cluster metrics from the ResourceManager.

Currently, this is empty.

@see ApplicationClientProtocol#getClusterMetrics(GetClusterMetricsRequest)]]>
YarnClusterMetrics for the cluster. @return YarnClusterMetrics for the cluster]]> ResourceManager to a client requesting cluster metrics. @see YarnClusterMetrics @see ApplicationClientProtocol#getClusterMetrics(GetClusterMetricsRequest)]]> The request from clients to get a report of all nodes in the cluster from the ResourceManager.

The request will ask for all nodes in the given {@link NodeState}s. @see ApplicationClientProtocol#getClusterNodes(GetClusterNodesRequest)]]>
NodeReport for all nodes in the cluster. @return NodeReport for all nodes in the cluster]]> The response sent by the ResourceManager to a client requesting a {@link NodeReport} for all nodes.

The NodeReport contains per-node information such as available resources, number of containers, tracking url, rack name, health status etc. @see NodeReport @see ApplicationClientProtocol#getClusterNodes(GetClusterNodesRequest)]]> ContainerId of the Container. @return ContainerId of the Container]]> ContainerId of the container @param containerId ContainerId of the container]]> The request sent by a client to the ResourceManager to get an {@link ContainerReport} for a container.

]]>
ContainerReport for the container. @return ContainerReport for the container]]> The response sent by the ResourceManager to a client requesting a container report.

The response includes a {@link ContainerReport} which has details of a container.

]]>
ApplicationAttemptId of an application attempt. @return ApplicationAttemptId of an application attempt]]> ApplicationAttemptId of an application attempt @param applicationAttemptId ApplicationAttemptId of an application attempt]]> The request from clients to get a list of container reports, which belong to an application attempt from the ResourceManager.

@see ApplicationHistoryProtocol#getContainers(GetContainersRequest)]]>
ContainerReport for all the containers of an application attempt. @return a list of ContainerReport for all the containers of an application attempt]]> ContainerReport for all the containers of an application attempt. @param containers a list of ContainerReport for all the containers of an application attempt]]> The response sent by the ResourceManager to a client requesting a list of {@link ContainerReport} for containers.

The ContainerReport for each container includes the container details.

@see ContainerReport @see ApplicationHistoryProtocol#getContainers(GetContainersRequest)]]>
ContainerIds of containers for which to obtain the ContainerStatus. @return the list of ContainerIds of containers for which to obtain the ContainerStatus.]]> ContainerIds of containers for which to obtain the ContainerStatus @param containerIds a list of ContainerIds of containers for which to obtain the ContainerStatus]]> ApplicationMaster to the NodeManager to get {@link ContainerStatus} of requested containers. @see ContainerManagementProtocol#getContainerStatuses(GetContainerStatusesRequest)]]> ContainerStatuses of the requested containers. @return ContainerStatuses of the requested containers.]]> NodeManager to the ApplicationMaster when asked to obtain the ContainerStatus of requested containers. @see ContainerManagementProtocol#getContainerStatuses(GetContainerStatusesRequest)]]> The request sent by clients to get a new {@link ApplicationId} for submitting an application.

Currently, this is empty.

@see ApplicationClientProtocol#getNewApplication(GetNewApplicationRequest)]]>
new ApplicationId allocated by the ResourceManager. @return new ApplicationId allocated by the ResourceManager]]> ResourceManager in the cluster. @return maximum capability of allocated resources in the cluster]]> The response sent by the ResourceManager to the client for a request to get a new {@link ApplicationId} for submitting applications.

Clients can submit an application with the returned {@link ApplicationId}.

@see ApplicationClientProtocol#getNewApplication(GetNewApplicationRequest)]]>
The request sent by clients to get a new {@code ReservationId} for submitting a reservation.

{@code ApplicationClientProtocol#getNewReservation(GetNewReservationRequest)}]]>
The response sent by the ResourceManager to the client for a request to get a new {@link ReservationId} for submitting reservations.

Clients can submit a reservation with the returned {@link ReservationId}.

{@code ApplicationClientProtocol#getNewReservation(GetNewReservationRequest)}]]>
queue name for which to get queue information. @return queue name for which to get queue information]]> queue name for which to get queue information @param queueName queue name for which to get queue information]]> active applications required? @return true if applications' information is to be included, else false]]> active applications? @param includeApplications fetch information about active applications?]]> child queues required? @return true if information about child queues is required, else false]]> child queues? @param includeChildQueues fetch information about child queues?]]> child queue hierarchy required? @return true if information about entire hierarchy is required, false otherwise]]> child queue hierarchy? @param recursive fetch information on the entire child queue hierarchy?]]> The request sent by clients to get queue information from the ResourceManager.

@see ApplicationClientProtocol#getQueueInfo(GetQueueInfoRequest)]]>
QueueInfo for the specified queue. @return QueueInfo for the specified queue]]> The response includes a {@link QueueInfo} which has details such as queue name, used/total capacities, running applications, child queues etc. @see QueueInfo @see ApplicationClientProtocol#getQueueInfo(GetQueueInfoRequest)]]> The request sent by clients to the ResourceManager to get queue acls for the current user.

Currently, this is empty.

@see ApplicationClientProtocol#getQueueUserAcls(GetQueueUserAclsInfoRequest)]]>
QueueUserACLInfo per queue for the user. @return QueueUserACLInfo per queue for the user]]> The response sent by the ResourceManager to clients seeking queue acls for the user.

The response contains a list of {@link QueueUserACLInfo} which provides information about {@link QueueACL} per queue.

@see QueueACL @see QueueUserACLInfo @see ApplicationClientProtocol#getQueueUserAcls(GetQueueUserAclsInfoRequest)]]>
Note: {@link NMToken} will be used for authenticating communication with {@code NodeManager}. @return the list of container tokens to be used for authorization during container resource increase. @see NMToken]]> AllocateResponse.getIncreasedContainers. The token contains the container id and resource capability required for container resource increase. @param containersToIncrease the list of container tokens to be used for container resource increase.]]> The request sent by Application Master to the Node Manager to change the resource quota of a container.

@see ContainerManagementProtocol#increaseContainersResource(IncreaseContainersResourceRequest)]]>
The response sent by the NodeManager to the ApplicationMaster when asked to increase container resource.

@see ContainerManagementProtocol#increaseContainersResource(IncreaseContainersResourceRequest)]]>
ApplicationId of the application to be aborted. @return ApplicationId of the application to be aborted]]> diagnostics to which the application is being killed. @return diagnostics to which the application is being killed]]> diagnostics to which the application is being killed. @param diagnostics diagnostics to which the application is being killed]]> The request sent by the client to the ResourceManager to abort a submitted application.

The request includes the {@link ApplicationId} of the application to be aborted.

@see ApplicationClientProtocol#forceKillApplication(KillApplicationRequest)]]>
ResourceManager to the client aborting a submitted application.

The response includes:

  • A flag which indicates whether the process of killing the application has completed.
Note: the user is recommended to wait until this flag becomes true; otherwise, if the ResourceManager crashes before the process of killing the application is completed, the ResourceManager may retry this application on recovery. @see ApplicationClientProtocol#forceKillApplication(KillApplicationRequest)]]>
ApplicationId of the application to be moved. @return ApplicationId of the application to be moved]]> ApplicationId of the application to be moved. @param appId ApplicationId of the application to be moved]]> The request sent by the client to the ResourceManager to move a submitted application to a different queue.

The request includes the {@link ApplicationId} of the application to be moved and the queue to place it in.

@see ApplicationClientProtocol#moveApplicationAcrossQueues(MoveApplicationAcrossQueuesRequest)]]>
The response sent by the ResourceManager to the client moving a submitted application to a different queue.

A response without exception means that the move has completed successfully.

@see ApplicationClientProtocol#moveApplicationAcrossQueues(MoveApplicationAcrossQueuesRequest)]]>
RegisterApplicationMasterRequest. If port or trackingUrl is not used, use the following default values:
  • port: -1
  • trackingUrl: null
The port is allowed to be any integer larger than or equal to -1. @return the new instance of RegisterApplicationMasterRequest]]>
host on which the ApplicationMaster is running. @return host on which the ApplicationMaster is running]]> host on which the ApplicationMaster is running. @param host host on which the ApplicationMaster is running]]> RPC port on which the {@code ApplicationMaster} is responding. @return the RPC port on which the {@code ApplicationMaster} is responding]]> RPC port on which the {@code ApplicationMaster} is responding. @param port RPC port on which the {@code ApplicationMaster} is responding]]> tracking URL for the ApplicationMaster. If this URL contains a scheme, it will be used by the ResourceManager web application proxy; otherwise it will default to http. @return tracking URL for the ApplicationMaster]]> tracking URL for the ApplicationMaster while it is running. This is the web-URL to which the ResourceManager or web-application proxy will redirect clients/users while the application and the ApplicationMaster are still running.

If the passed url has a scheme then that will be used by the ResourceManager and web-application proxy, otherwise the scheme will default to http.

Empty, null, "N/A" strings are all valid besides a real URL. In case an url isn't explicitly passed, it defaults to "N/A" on the ResourceManager.

@param trackingUrl tracking URL for the ApplicationMaster]]> The registration includes details such as:

  • Hostname on which the AM is running.
  • RPC Port
  • Tracking URL
@see ApplicationMasterProtocol#registerApplicationMaster(RegisterApplicationMasterRequest)]]>
ResourceManager in the cluster. @return maximum capability of allocated resources in the cluster]]> ApplicationACLs for the application. @return all the ApplicationACLs]]> Get ClientToAMToken master key.

The ClientToAMToken master key is sent to the ApplicationMaster by the ResourceManager via {@link RegisterApplicationMasterResponse}, and is used to verify the corresponding ClientToAMToken.

@return ClientToAMToken master key]]>
Get the queue that the application was placed in.

@return the queue that the application was placed in.]]> Set the queue that the application was placed in.

]]> Get the list of running containers as viewed by ResourceManager from previous application attempts.

@return the list of running containers as viewed by ResourceManager from previous application attempts @see RegisterApplicationMasterResponse#getNMTokensFromPreviousAttempts()]]>
The response contains critical details such as:
  • Maximum capability for allocated resources in the cluster.
  • {@code ApplicationACL}s for the application.
  • ClientToAMToken master key.
@see ApplicationMasterProtocol#registerApplicationMaster(RegisterApplicationMasterRequest)]]>
ContainerId of the container to re-initialize. @return ContainerId of the container to re-initialize.]]> ContainerLaunchContext to re-initialize the container with. @return ContainerLaunchContext to re-initialize the container with.]]> ApplicationId of the resource to be released. @return ApplicationId]]> ApplicationId of the resource to be released. @param id ApplicationId]]> key of the resource to be released. @return key]]> key of the resource to be released. @param key unique identifier for the resource]]> The request from clients to release a resource in the shared cache.

]]>
The response to clients from the SharedCacheManager when releasing a resource in the shared cache.

Currently, this is empty.

]]>
The response sent by the ResourceManager to a client on reservation submission.

Currently, this is empty.

{@code ApplicationClientProtocol#submitReservation( ReservationSubmissionRequest)}]]>
ContainerId of the container to localize resources. @return ContainerId of the container to localize resources.]]> LocalResource required by the container. @return all LocalResource required by the container]]> ContainerId of the container to signal. @return ContainerId of the container to signal.]]> ContainerId of the container to signal.]]> SignalContainerCommand of the signal request. @return SignalContainerCommand of the signal request.]]> SignalContainerCommand of the signal request.]]> The request sent by the client to the ResourceManager or by the ApplicationMaster to the NodeManager to signal a container. @see SignalContainerCommand

]]>
The response sent by the ResourceManager to the client signalling a container.

Currently it's empty.

@see ApplicationClientProtocol#signalToContainer(SignalContainerRequest)]]>
ContainerLaunchContext for the container to be started by the NodeManager. @return ContainerLaunchContext for the container to be started by the NodeManager]]> ContainerLaunchContext for the container to be started by the NodeManager @param context ContainerLaunchContext for the container to be started by the NodeManager]]> Note: {@link NMToken} will be used for authenticating communication with {@code NodeManager}. @return the container token to be used for authorization during starting container. @see NMToken @see ContainerManagementProtocol#startContainers(StartContainersRequest)]]> The request sent by the ApplicationMaster to the NodeManager to start a container.

The ApplicationMaster has to provide details such as allocated resource capability, security tokens (if enabled), command to be executed to start the container, environment for the process, necessary binaries/jar/shared-objects etc. via the {@link ContainerLaunchContext}.

@see ContainerManagementProtocol#startContainers(StartContainersRequest)]]>
The request which contains a list of {@link StartContainerRequest} sent by the ApplicationMaster to the NodeManager to start containers.

In each {@link StartContainerRequest}, the ApplicationMaster has to provide details such as allocated resource capability, security tokens (if enabled), command to be executed to start the container, environment for the process, necessary binaries/jar/shared-objects etc. via the {@link ContainerLaunchContext}.

@see ContainerManagementProtocol#startContainers(StartContainersRequest)]]>
ContainerIds of the containers that are started successfully. @return the list of ContainerIds of the containers that are started successfully. @see ContainerManagementProtocol#startContainers(StartContainersRequest)]]> Get the meta-data from all auxiliary services running on the NodeManager.

The meta-data is returned as a Map between the auxiliary service names and their corresponding per-service meta-data as an opaque ByteBuffer blob.

To be able to interpret the per-service meta-data, you should consult the documentation for the auxiliary service configured on the NodeManager.

@return a Map between the names of auxiliary services and their corresponding meta-data]]>
The response sent by the NodeManager to the ApplicationMaster when asked to start an allocated container.

@see ContainerManagementProtocol#startContainers(StartContainersRequest)]]>
ContainerIds of the containers to be stopped. @return ContainerIds of containers to be stopped]]> ContainerIds of the containers to be stopped. @param containerIds ContainerIds of the containers to be stopped]]> The request sent by the ApplicationMaster to the NodeManager to stop containers.

@see ContainerManagementProtocol#stopContainers(StopContainersRequest)]]>
The response sent by the NodeManager to the ApplicationMaster when asked to stop allocated containers.

@see ContainerManagementProtocol#stopContainers(StopContainersRequest)]]>
ApplicationSubmissionContext for the application. @return ApplicationSubmissionContext for the application]]> ApplicationSubmissionContext for the application. @param context ApplicationSubmissionContext for the application]]> The request sent by a client to submit an application to the ResourceManager.

The request, via {@link ApplicationSubmissionContext}, contains details such as queue, {@link Resource} required to run the ApplicationMaster, the equivalent of {@link ContainerLaunchContext} for launching the ApplicationMaster etc. @see ApplicationClientProtocol#submitApplication(SubmitApplicationRequest)]]> The response sent by the ResourceManager to a client on application submission.

Currently, this is empty.

@see ApplicationClientProtocol#submitApplication(SubmitApplicationRequest)]]>
ApplicationId of the application. @return ApplicationId of the application]]> ApplicationId of the application. @param applicationId ApplicationId of the application]]> Priority of the application to be set. @return Priority of the application to be set.]]> Priority of the application. @param priority Priority of the application]]> The request sent by the client to the ResourceManager to set or update the application priority.

The request includes the {@link ApplicationId} of the application and {@link Priority} to be set for an application

@see ApplicationClientProtocol#updateApplicationPriority(UpdateApplicationPriorityRequest)]]>
Priority of the application to be set. @return Updated Priority of the application.]]> Priority of the application. @param priority Priority of the application]]> The response sent by the ResourceManager to the client on updating the application priority.

A response without exception means that the priority update has completed successfully.

@see ApplicationClientProtocol#updateApplicationPriority(UpdateApplicationPriorityRequest)]]>
ApplicationId of the application. @return ApplicationId of the application]]> ApplicationId of the application. @param applicationId ApplicationId of the application]]> ApplicationTimeouts of the application. Timeout value is in ISO 8601 standard with format yyyy-MM-dd'T'HH:mm:ss.SSSZ. @return all ApplicationTimeouts of the application.]]> ApplicationTimeouts for the application. Timeout value is absolute and should meet the ISO 8601 format yyyy-MM-dd'T'HH:mm:ss.SSSZ. All pre-existing Map entries are cleared before adding the new Map. @param applicationTimeouts ApplicationTimeouts for the application]]> The request sent by the client to the ResourceManager to set or update the application timeout.

The request includes the {@link ApplicationId} of the application and the timeouts to be set for the application.

]]>
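For illustration only (not part of the original Javadoc), a minimal sketch of building such a request, assuming the UpdateApplicationTimeoutsRequest#newInstance factory and using plain JDK date formatting for the ISO8601 expiry time; appId is a placeholder:

  // Extend the application's LIFETIME timeout to one hour from now.
  SimpleDateFormat iso8601 = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
  String expiry = iso8601.format(new Date(System.currentTimeMillis() + 3600_000L));
  Map<ApplicationTimeoutType, String> timeouts =
      Collections.singletonMap(ApplicationTimeoutType.LIFETIME, expiry);
  UpdateApplicationTimeoutsRequest request =
      UpdateApplicationTimeoutsRequest.newInstance(appId, timeouts);
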
ApplicationTimeouts of the application. Timeout value is in ISO8601 standard with format yyyy-MM-dd'T'HH:mm:ss.SSSZ. @return all ApplicationTimeouts of the application.]]> ApplicationTimeouts for the application. Timeout value is absolute. Timeout values must be in ISO8601 format; the supported format is yyyy-MM-dd'T'HH:mm:ss.SSSZ. All pre-existing Map entries are cleared before adding the new Map. @param applicationTimeouts ApplicationTimeouts for the application]]> The response sent by the ResourceManager to the client on updating the application timeout.

A response without exception means that the update has completed successfully.

]]>
ApplicationId of the resource to be used. @return ApplicationId]]> ApplicationId of the resource to be used. @param id ApplicationId]]> key of the resource to be used. @return key]]> key of the resource to be used. @param key unique identifier for the resource]]> The request from clients to the SharedCacheManager that claims a resource in the shared cache.

]]>
Path corresponding to the requested resource in the shared cache. @return String A Path if the resource exists in the shared cache, null otherwise]]> Path corresponding to a resource in the shared cache. @param p A Path corresponding to a resource in the shared cache]]> The response from the SharedCacheManager to the client that indicates whether a requested resource exists in the cache.

]]>
ApplicationId of the ApplicationAttemptId. @return ApplicationId of the ApplicationAttemptId]]> attempt id of the Application. @return attempt id of the Application]]> ApplicationAttemptId denotes the particular attempt of an ApplicationMaster for a given {@link ApplicationId}.

Multiple attempts might be needed to run an application to completion due to temporary failures of the ApplicationMaster, such as hardware failures or connectivity issues on the node on which it was scheduled.

]]>
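A minimal sketch (illustrative, not from the original Javadoc) of how the identifier is composed; clusterTs is a placeholder for the ResourceManager start time:

  // Second attempt of application number 42.
  ApplicationId appId = ApplicationId.newInstance(clusterTs, 42);
  ApplicationAttemptId attemptId = ApplicationAttemptId.newInstance(appId, 2);
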
YarnApplicationAttemptState of the application attempt. @return YarnApplicationAttemptState of the application attempt]]> RPC port of this attempt's ApplicationMaster. @return RPC port of this attempt's ApplicationMaster]]> host on which this attempt's ApplicationMaster is running. @return host on which this attempt's ApplicationMaster is running]]> diagnostic information of the application attempt in case of errors. @return diagnostic information of the application attempt in case of errors]]> tracking url for the application attempt. @return tracking url for the application attempt]]> original tracking url for the application attempt. @return original tracking url for the application attempt]]> ApplicationAttemptId of this attempt of the application @return ApplicationAttemptId of the attempt]]> ContainerId of AMContainer for this attempt @return ContainerId of the attempt]]> finish time of the application. @return finish time of the application]]> It includes details such as:
  • {@link ApplicationAttemptId} of the application.
  • Host on which the ApplicationMaster of this attempt is running.
  • RPC port of the ApplicationMaster of this attempt.
  • Tracking URL.
  • Diagnostic information in case of errors.
  • {@link YarnApplicationAttemptState} of the application attempt.
  • {@link ContainerId} of the master Container.
]]>
ApplicationId which is unique for all applications started by a particular instance of the ResourceManager. @return short integer identifier of the ApplicationId]]> start time of the ResourceManager which is used to generate globally unique ApplicationId. @return start time of the ResourceManager]]> ApplicationId represents the globally unique identifier for an application.

The globally unique nature of the identifier is achieved by using the cluster timestamp i.e. start-time of the ResourceManager along with a monotonically increasing counter for the application.

]]>
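Illustrative sketch (not from the original Javadoc) of reading the two components that make the identifier globally unique; report is assumed to be an ApplicationReport obtained elsewhere:

  ApplicationId appId = report.getApplicationId();
  long clusterTimestamp = appId.getClusterTimestamp(); // RM start time
  int sequenceNumber = appId.getId();                  // monotonically increasing counter
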
ApplicationId of the application. @return ApplicationId of the application]]> ApplicationAttemptId of the current attempt of the application @return ApplicationAttemptId of the attempt]]> user who submitted the application. @return user who submitted the application]]> queue to which the application was submitted. @return queue to which the application was submitted]]> name of the application. @return name of the application]]> host on which the ApplicationMaster is running. @return host on which the ApplicationMaster is running]]> RPC port of the ApplicationMaster. @return RPC port of the ApplicationMaster]]> client token for communicating with the ApplicationMaster.

ClientToAMToken is the security token used by the AMs to verify authenticity of any client.

The ResourceManager provides a secure token (via {@link ApplicationReport#getClientToAMToken()}) which is verified by the ApplicationMaster when the client directly talks to an AM.

@return client token for communicating with the ApplicationMaster]]>
YarnApplicationState of the application. @return YarnApplicationState of the application]]> diagnostic information of the application in case of errors. @return diagnostic information of the application in case of errors]]> tracking url for the application. @return tracking url for the application]]> start time of the application. @return start time of the application]]> finish time of the application. @return finish time of the application]]> final finish status of the application. @return final finish status of the application]]> The AMRM token is required for AM to RM scheduling operations. For managed ApplicationMasters, YARN takes care of injecting it. For unmanaged ApplicationMasters, the token must be obtained via this method and set in the {@link org.apache.hadoop.security.UserGroupInformation} of the current user.

The AMRM token will be returned only if all the following conditions are met:

  • the requester is the owner of the ApplicationMaster
  • the application master is an unmanaged ApplicationMaster
  • the application master is in ACCEPTED state
Otherwise, this method returns null. @return the AM to RM token if available.]]>
It includes details such as:
  • {@link ApplicationId} of the application.
  • Application user.
  • Application queue.
  • Application name.
  • Host on which the ApplicationMaster is running.
  • RPC port of the ApplicationMaster.
  • Tracking URL.
  • {@link YarnApplicationState} of the application.
  • Diagnostic information in case of errors.
  • Start time of the application.
  • Client {@link Token} of the application (if security is enabled).
@see ApplicationClientProtocol#getApplicationReport(org.apache.hadoop.yarn.api.protocolrecords.GetApplicationReportRequest)]]>
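As an illustrative sketch (not part of the original Javadoc), the report is most commonly obtained through the YarnClient convenience library rather than the raw protocol; conf and appId are placeholders:

  YarnClient yarnClient = YarnClient.createYarnClient();
  yarnClient.init(conf);
  yarnClient.start();
  ApplicationReport report = yarnClient.getApplicationReport(appId);
  System.out.println(report.getYarnApplicationState() + " on "
      + report.getHost() + ":" + report.getRpcPort()
      + ", tracking URL: " + report.getTrackingUrl());
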
Resource. -1 for invalid/inaccessible reports. @return the used Resource]]> Resource. -1 for invalid/inaccessible reports. @return the reserved Resource]]> Resource. -1 for invalid/inaccessible reports. @return the needed Resource]]> ApplicationId of the submitted application. @return ApplicationId of the submitted application]]> ApplicationId of the submitted application. @param applicationId ApplicationId of the submitted application]]> name. @return application name]]> name. @param applicationName application name]]> queue to which the application is being submitted. @return queue to which the application is being submitted]]> queue to which the application is being submitted @param queue queue to which the application is being submitted]]> Priority of the application. @return Priority of the application]]> ContainerLaunchContext to describe the Container with which the ApplicationMaster is launched. @return ContainerLaunchContext for the ApplicationMaster container]]> ContainerLaunchContext to describe the Container with which the ApplicationMaster is launched. @param amContainer ContainerLaunchContext for the ApplicationMaster container]]> YarnApplicationState. Such apps will not be retried by the RM on app attempt failure. The default value is false. @return true if the AM is not managed by the RM]]> ApplicationMaster for this application. Please note this will be DEPRECATED, use getResource in getAMContainerResourceRequest instead. @return the resource required by the ApplicationMaster for this application.]]> ApplicationMaster for this application. @param resource the resource required by the ApplicationMaster for this application.]]> For managed AM, if the flag is true, running containers will not be killed when application attempt fails and these containers will be retrieved by the new application attempt on registration via {@link ApplicationMasterProtocol#registerApplicationMaster(RegisterApplicationMasterRequest)}.

For unmanaged AM, if the flag is true, RM allows re-register and returns the running containers in the same attempt back to the UAM for HA.

@param keepContainers the flag which indicates whether to keep containers across application attempts.]]>
getResource and getPriority of ApplicationSubmissionContext. Number of containers and Priority will be ignored. @return ResourceRequest of the AM container @deprecated See {@link #getAMContainerResourceRequests()}]]> getAMContainerResourceRequest and its behavior. Number of containers and Priority will be ignored. @return List of ResourceRequests of the AM container]]> LogAggregationContext of the application @return LogAggregationContext of the application]]> LogAggregationContext for the application @param logAggregationContext for the application]]> ApplicationTimeouts of the application. Timeout value is in seconds. @return all ApplicationTimeouts of the application.]]> ApplicationTimeouts for the application in seconds. All pre-existing Map entries are cleared before adding the new Map.

Note: If application timeout value is less than or equal to zero then application submission will throw an exception.

@param applicationTimeouts ApplicationTimeouts for the application]]>
It includes details such as:
  • {@link ApplicationId} of the application.
  • Application user.
  • Application name.
  • {@link Priority} of the application.
  • {@link ContainerLaunchContext} of the container in which the ApplicationMaster is executed.
  • maxAppAttempts. The maximum number of application attempts. It should be no larger than the global number of max attempts in the YARN configuration.
  • attemptFailuresValidityInterval. The default value is -1. When attemptFailuresValidityInterval (in milliseconds) is set to a value {@literal >} 0, failures that happened outside the validity interval are not counted towards the failure count. If the failure count reaches maxAppAttempts, the application is failed.
  • Optional, application-specific {@link LogAggregationContext}
@see ContainerLaunchContext @see ApplicationClientProtocol#submitApplication(org.apache.hadoop.yarn.api.protocolrecords.SubmitApplicationRequest)]]>
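A minimal, illustrative sketch (not from the original Javadoc) of filling in a submission context through the YarnClient library; yarnClient and amContainer (a ContainerLaunchContext prepared elsewhere) are placeholders:

  YarnClientApplication app = yarnClient.createApplication();
  ApplicationSubmissionContext ctx = app.getApplicationSubmissionContext();
  ctx.setApplicationName("my-app");
  ctx.setQueue("default");
  ctx.setPriority(Priority.newInstance(0));
  ctx.setResource(Resource.newInstance(1024, 1));   // resource for the AM container
  ctx.setAMContainerSpec(amContainer);
  ctx.setMaxAppAttempts(2);
  ctx.setAttemptFailuresValidityInterval(60_000L);  // 1 minute
  yarnClient.submitApplication(ctx);
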
expiryTime for given timeout type. @return expiryTime in ISO8601 standard with format yyyy-MM-dd'T'HH:mm:ss.SSSZ.]]> expiryTime for given timeout type. @param expiryTime in ISO8601 standard with format yyyy-MM-dd'T'HH:mm:ss.SSSZ.]]> Remaining Time of an application for given timeout type. @return Remaining Time in seconds.]]> Remaining Time of an application for given timeout type. @param remainingTime in seconds.]]>
  • {@link ApplicationTimeoutType} of the timeout type.
  • Expiry time in ISO8601 standard with format yyyy-MM-dd'T'HH:mm:ss.SSSZ or "UNLIMITED".
  • Remaining time in seconds.
  • The possible values for {ExpiryTime, RemainingTimeInSeconds} are
    • {UNLIMITED,-1} : Timeout is not configured for given timeout type (LIFETIME).
    • {ISO8601 date string, 0} : Timeout is configured and application has completed.
    • {ISO8601 date string, greater than zero} : Timeout is configured and application is RUNNING. Application will be timed out after configured value.
    ]]>
    Resource allocated to the container. @return Resource allocated to the container]]> Priority at which the Container was allocated. @return Priority at which the Container was allocated]]> ContainerToken for the container.

    ContainerToken is the security token used by the framework to verify authenticity of any Container.

    The ResourceManager, on container allocation provides a secure token which is verified by the NodeManager on container launch.

    Applications do not need to care about ContainerToken, they are transparently handled by the framework - the allocated Container includes the ContainerToken.

    @see ApplicationMasterProtocol#allocate(org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest) @see ContainerManagementProtocol#startContainers(org.apache.hadoop.yarn.api.protocolrecords.StartContainersRequest) @return ContainerToken for the container]]>
    ID corresponding to the original {@code ResourceRequest} (see {@link ResourceRequest#getAllocationRequestId()}) which is satisfied by this allocated {@code Container}.

    The scheduler may return multiple {@code AllocateResponse}s corresponding to the same ID as and when scheduler allocates {@code Container}s. Applications can continue to completely ignore the returned ID in the response and use the allocation for any of their outstanding requests.

    @return the ID corresponding to the original allocation request which is satisfied by this allocation.]]> The {@code ResourceManager} is the sole authority to allocate any {@code Container} to applications. The allocated {@code Container} is always on a single node and has a unique {@link ContainerId}. It has a specific amount of {@link Resource} allocated.

    It includes details such as:

    • {@link ContainerId} for the container, which is globally unique.
    • {@link NodeId} of the node on which it is allocated.
    • HTTP uri of the node.
    • {@link Resource} allocated to the container.
    • {@link Priority} at which the container was allocated.
    • Container {@link Token} of the container, used to securely verify authenticity of the allocation.
    Typically, an {@code ApplicationMaster} receives the {@code Container} from the {@code ResourceManager} during resource-negotiation and then talks to the {@code NodeManager} to start/stop containers. @see ApplicationMasterProtocol#allocate(org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest) @see ContainerManagementProtocol#startContainers(org.apache.hadoop.yarn.api.protocolrecords.StartContainersRequest) @see ContainerManagementProtocol#stopContainers(org.apache.hadoop.yarn.api.protocolrecords.StopContainersRequest)]]>
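    As an illustrative sketch (not part of the original Javadoc), an ApplicationMaster typically walks the containers returned by an allocate call; allocateResponse is a placeholder for an AllocateResponse:

      for (Container container : allocateResponse.getAllocatedContainers()) {
        ContainerId id = container.getId();
        NodeId node = container.getNodeId();
        Resource allocated = container.getResource();
        Priority priority = container.getPriority();
        Token token = container.getContainerToken();
        // ... ask the NodeManager on `node` to launch work via startContainers()
      }
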
    ApplicationAttemptId of the application to which the Container was assigned.

    Note: If containers are kept alive across application attempts via {@link ApplicationSubmissionContext#setKeepContainersAcrossApplicationAttempts(boolean)}, the ContainerId does not necessarily contain the current running application attempt's ApplicationAttemptId. The container may have been allocated by a previously exited application attempt and be managed by the current running attempt, and thus carry the previous application attempt's ApplicationAttemptId.

    @return ApplicationAttemptId of the application to which the Container was assigned]]>
    ContainerId, which doesn't include epoch. Note that this method will be marked as deprecated, so please use getContainerId instead. @return lower 32 bits of identifier of the ContainerId]]> ContainerId. Upper 24 bits are reserved as epoch of cluster, and lower 40 bits are reserved as sequential number of containers. @return identifier of the ContainerId]]> ContainerId represents a globally unique identifier for a {@link Container} in the cluster.

    ]]>
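    A minimal sketch (illustrative, not from the original Javadoc) of building and decomposing the identifier; attemptId is a placeholder ApplicationAttemptId:

      ContainerId containerId = ContainerId.newContainerId(attemptId, 1);
      long encoded = containerId.getContainerId();   // epoch in upper bits, sequence number in lower bits
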
    LocalResource required by the container. @return all LocalResource required by the container]]> LocalResource required by the container. All pre-existing Map entries are cleared before adding the new Map. @param localResources LocalResource required by the container]]> Get application-specific binary service data. This is a map keyed by the name of each {@link AuxiliaryService} that is configured on a NodeManager and the values correspond to the application-specific data targeted for the keyed {@link AuxiliaryService}.

    This will be used to initialize this application on the specific {@link AuxiliaryService} running on the NodeManager by calling {@link AuxiliaryService#initializeApplication(ApplicationInitializationContext)}.

    @return application-specific binary service data]]>
    Set application-specific binary service data. This is a map keyed by the name of each {@link AuxiliaryService} that is configured on a NodeManager and the values correspond to the application-specific data targeted for the keyed {@link AuxiliaryService}. All pre-existing Map entries are preserved.

    @param serviceData application-specific binary service data]]>
    environment variables for the container. @return environment variables for the container]]> environment variables for the container. All pre-existing Map entries are cleared before adding the new Map @param environment environment variables for the container]]> commands for launching the container. @return the list of commands for launching the container]]> commands for launching the container. All pre-existing List entries are cleared before adding the new List @param commands the list of commands for launching the container]]> ApplicationACLs for the application. @return all the ApplicationACLs]]> ApplicationACLs for the application. All pre-existing Map entries are cleared before adding the new Map @param acls ApplicationACLs for the application]]> ContainerRetryContext to relaunch container. @return ContainerRetryContext to relaunch container.]]> ContainerRetryContext to relaunch container. @param containerRetryContext ContainerRetryContext to relaunch container.]]> It includes details such as:
    • {@link ContainerId} of the container.
    • {@link Resource} allocated to the container.
    • User to whom the container is allocated.
    • Security tokens (if security is enabled).
    • {@link LocalResource} necessary for running the container such as binaries, jar, shared-objects, side-files etc.
    • Optional, application-specific binary service data.
    • Environment variables for the launched process.
    • Command to launch the container.
    • Retry strategy when container exits with failure.
    @see ContainerManagementProtocol#startContainers(org.apache.hadoop.yarn.api.protocolrecords.StartContainersRequest)]]>
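    A minimal, illustrative sketch (not from the original Javadoc) of a launch context for a simple shell command; localResources is a Map<String, LocalResource> prepared as in the LocalResource example further below, and tokens, service data and ACLs are omitted:

      Map<String, String> env = new HashMap<>();
      env.put("CLASSPATH", "./*");
      List<String> commands = Collections.singletonList(
          "echo hello 1>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stdout");
      ContainerLaunchContext clc = ContainerLaunchContext.newInstance(
          localResources, env, commands, null, null, null);
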
    ContainerId of the container. @return ContainerId of the container.]]> Resource of the container. @return allocated Resource of the container.]]> NodeId where container is running. @return allocated NodeId where container is running.]]> Priority of the container. @return allocated Priority of the container.]]> ContainerState of the container. @return final ContainerState of the container.]]> exit status of the container. @return final exit status of the container.]]> It includes details such as:
    • {@link ContainerId} of the container.
    • Allocated Resources to the container.
    • Assigned Node id.
    • Assigned Priority.
    • Creation Time.
    • Finish Time.
    • Container Exit Status.
    • {@link ContainerState} of the container.
    • Diagnostic information in case of errors.
    • Log URL.
    • nodeHttpAddress
    ]]>
    It provides details such as:
    • {@link ContainerRetryPolicy}:
      • NEVER_RETRY (default): regardless of the error code, do not retry when the container fails to run.
      • RETRY_ON_ALL_ERRORS: regardless of the error code, retry when the container fails to run.
      • RETRY_ON_SPECIFIC_ERROR_CODES: when the container fails to run, retry only if the error code is one of errorCodes, otherwise do not retry. Note: if the error code is 137 (SIGKILL) or 143 (SIGTERM), the container will not be retried because it was usually killed on purpose.
    • maxRetries specifies how many times to retry. If the value is -1, it means retry forever.
    • retryInterval specifies the delay, in milliseconds, before relaunching the container.
    ]]>
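    An illustrative sketch (not from the original Javadoc), assuming the ContainerRetryContext#newInstance factory; clc is a placeholder ContainerLaunchContext:

      // Retry up to 3 times, 5 seconds apart, but only for exit code 1.
      ContainerRetryContext retryContext = ContainerRetryContext.newInstance(
          ContainerRetryPolicy.RETRY_ON_SPECIFIC_ERROR_CODES,
          Collections.singleton(1),   // errorCodes
          3,                          // maxRetries (-1 would mean retry forever)
          5000);                      // retryInterval in milliseconds
      clc.setContainerRetryContext(retryContext);
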
    Retry policy for relaunching a Container.

    ]]>
    State of a Container.

    ]]>
    ContainerId of the container. @return ContainerId of the container]]> ExecutionType of the container. @return ExecutionType of the container]]> ContainerState of the container. @return ContainerState of the container]]> Get the exit status for the container.

    Note: This is valid only for completed containers i.e. containers with state {@link ContainerState#COMPLETE}. Otherwise, it returns ContainerExitStatus.INVALID.

    Containers killed by the framework, either due to being released by the application or being 'lost' due to node failures etc. have a special exit code of ContainerExitStatus.ABORTED.

    When a threshold number of the nodemanager-local-directories or a threshold number of the nodemanager-log-directories become bad, the container is not launched and exits with ContainerExitStatus.DISKS_FAILED.

    @return exit status for the container]]>
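    As an illustrative sketch (not part of the original Javadoc), an ApplicationMaster might interpret a completed container's status as follows; status is a placeholder ContainerStatus taken from AllocateResponse#getCompletedContainersStatuses():

      if (status.getState() == ContainerState.COMPLETE) {
        int exit = status.getExitStatus();
        if (exit == ContainerExitStatus.ABORTED) {
          // Released by the application or lost with the node; usually not
          // counted as an application failure.
        } else if (exit == ContainerExitStatus.DISKS_FAILED) {
          // Too many bad local/log dirs on the node; the container never launched.
        } else if (exit != ContainerExitStatus.SUCCESS) {
          System.err.println("Container failed: " + status.getDiagnostics());
        }
      }
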
    diagnostic messages for failed containers. @return diagnostic messages for failed containers]]> Resource allocated to the container. @return Resource allocated to the container]]> It provides details such as:
    • {@code ContainerId} of the container.
    • {@code ExecutionType} of the container.
    • {@code ContainerState} of the container.
    • Exit status of a completed container.
    • Diagnostic message for a failed container.
    • {@link Resource} allocated to the container.
    ]]>
    The execution types are the following:
    • {@link #GUARANTEED} - this container is guaranteed to start its execution, once the corresponding start container request is received by an NM.
    • {@link #OPPORTUNISTIC} - the execution of this container may not start immediately at the NM that receives the corresponding start container request (depending on the NM's available resources). Moreover, it may be preempted if it blocks a GUARANTEED container from being executed.
    ]]>
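    A one-line illustrative sketch (not from the original Javadoc), assuming the ExecutionTypeRequest#newInstance factory described in the next fragment:

      // Request an OPPORTUNISTIC container and insist that the type is honored exactly.
      ExecutionTypeRequest execReq =
          ExecutionTypeRequest.newInstance(ExecutionType.OPPORTUNISTIC, true);
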
    ExecutionType of the requested container. @param execType ExecutionType of the requested container]]> ExecutionType. @return ExecutionType.]]> ResourceRequest. Defaults to false. @return whether ExecutionType request should be strictly honored]]> ExecutionType as well as a flag that explicitly asks the configured Scheduler to return Containers of exactly the Execution Type requested.]]> Application.]]> location of the resource to be localized. @return location of the resource to be localized]]> location of the resource to be localized. @param resource location of the resource to be localized]]> size of the resource to be localized. @return size of the resource to be localized]]> size of the resource to be localized. @param size size of the resource to be localized]]> timestamp of the resource to be localized, used for verification. @return timestamp of the resource to be localized]]> timestamp of the resource to be localized, used for verification. @param timestamp timestamp of the resource to be localized]]> LocalResourceType of the resource to be localized. @return LocalResourceType of the resource to be localized]]> LocalResourceType of the resource to be localized. @param type LocalResourceType of the resource to be localized]]> LocalResourceVisibility of the resource to be localized. @return LocalResourceVisibility of the resource to be localized]]> LocalResourceVisibility of the resource to be localized. @param visibility LocalResourceVisibility of the resource to be localized]]> pattern that should be used to extract entries from the archive (only used when type is PATTERN). @return pattern that should be used to extract entries from the archive.]]> pattern that should be used to extract entries from the archive (only used when type is PATTERN). @param pattern pattern that should be used to extract entries from the archive.]]> shouldBeUploadedToSharedCache of this request]]> LocalResource represents a local resource required to run a container.

    The NodeManager is responsible for localizing the resource prior to launching the container.

    Applications can specify {@link LocalResourceType} and {@link LocalResourceVisibility}.

    @see LocalResourceType @see LocalResourceVisibility @see ContainerLaunchContext @see ApplicationSubmissionContext @see ContainerManagementProtocol#startContainers(org.apache.hadoop.yarn.api.protocolrecords.StartContainersRequest)]]>
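    A minimal, illustrative sketch (not from the original Javadoc) of publishing a jar already sitting on HDFS as an application-visibility resource; fs (a FileSystem) and jarPath (a Path) are placeholders, and the URL.fromPath helper is assumed to be available:

      FileStatus jarStatus = fs.getFileStatus(jarPath);
      LocalResource appJar = LocalResource.newInstance(
          URL.fromPath(jarPath),
          LocalResourceType.FILE,
          LocalResourceVisibility.APPLICATION,
          jarStatus.getLen(),
          jarStatus.getModificationTime());
      Map<String, LocalResource> localResources = new HashMap<>();
      localResources.put("app.jar", appJar);
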
    type of a resource localized by the {@code NodeManager}.

    The type can be one of:

    • {@link #FILE} - Regular file i.e. uninterpreted bytes.
    • {@link #ARCHIVE} - Archive, which is automatically unarchived by the NodeManager.
    • {@link #PATTERN} - A hybrid between {@link #ARCHIVE} and {@link #FILE}.
    @see LocalResource @see ContainerLaunchContext @see ApplicationSubmissionContext @see ContainerManagementProtocol#startContainers(org.apache.hadoop.yarn.api.protocolrecords.StartContainersRequest)]]>
    visibility of a resource localized by the {@code NodeManager}.

    The visibility can be one of:

    • {@link #PUBLIC} - Shared by all users on the node.
    • {@link #PRIVATE} - Shared among all applications of the same user on the node.
    • {@link #APPLICATION} - Shared only among containers of the same application on the node.
    @see LocalResource @see ContainerLaunchContext @see ApplicationSubmissionContext @see ContainerManagementProtocol#startContainers(org.apache.hadoop.yarn.api.protocolrecords.StartContainersRequest)]]>
    It includes details such as:
    • includePattern. It uses Java Regex to filter the log files which match the defined include pattern and those log files will be uploaded when the application finishes.
    • excludePattern. It uses Java Regex to filter the log files which match the defined exclude pattern and those log files will not be uploaded when application finishes. If the log file name matches both the include and the exclude pattern, this file will be excluded eventually.
    • rolledLogsIncludePattern. It uses Java Regex to filter the log files which match the defined include pattern and those log files will be aggregated in a rolling fashion.
    • rolledLogsExcludePattern. It uses Java Regex to filter the log files which match the defined exclude pattern and those log files will not be aggregated in a rolling fashion. If the log file name matches both the include and the exclude pattern, this file will be excluded eventually.
    • policyClassName. The name of the policy class that implements ContainerLogAggregationPolicy. At runtime, the NodeManager asks the policy whether a given container's log should be aggregated, based on the ContainerType and other runtime state such as the exit code, by calling ContainerLogAggregationPolicy#shouldDoLogAggregation. This is useful when the application only wants to aggregate logs of a subset of containers. Make sure to specify the canonical name by prefixing org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation. to the class simple name below. The available policies are:
      • NoneContainerLogAggregationPolicy: skip aggregation for all containers.
      • AllContainerLogAggregationPolicy: aggregate all containers.
      • AMOrFailedContainerLogAggregationPolicy: aggregate ApplicationMaster or failed containers.
      • FailedOrKilledContainerLogAggregationPolicy: aggregate failed or killed containers.
      • FailedContainerLogAggregationPolicy: aggregate failed containers.
      • AMOnlyLogAggregationPolicy: aggregate ApplicationMaster containers.
      • SampleContainerLogAggregationPolicy: sample logs of successful worker containers, in addition to ApplicationMaster and failed/killed containers.
      If policyClassName isn't specified, the cluster-wide default policy defined by the configuration yarn.nodemanager.log-aggregation.policy.class is used. The default value of yarn.nodemanager.log-aggregation.policy.class is AllContainerLogAggregationPolicy.
    • policyParameters. The parameters passed to the policy class via ContainerLogAggregationPolicy#parseParameters during the policy object initialization. This is optional. Some policy class might use parameters to adjust its settings. It is up to policy class to define the scheme of parameters. For example, SampleContainerLogAggregationPolicy supports the format of "SR:0.5,MIN:50", which means sample rate of 50% beyond the first 50 successful worker containers.
    @see ApplicationSubmissionContext]]>
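    An illustrative sketch (not from the original Javadoc), assuming a two-argument LogAggregationContext#newInstance(includePattern, excludePattern) overload; ctx is a placeholder ApplicationSubmissionContext:

      // Aggregate *.log files but skip anything whose name starts with "tmp".
      LogAggregationContext logCtx = LogAggregationContext.newInstance(".*\\.log", "tmp.*");
      ctx.setLogAggregationContext(logCtx);
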
    NodeManager for which the NMToken is used to authenticate. @return the {@link NodeId} of the NodeManager for which the NMToken is used to authenticate.]]> NodeManager @return the {@link Token} used for authenticating with NodeManager]]> The NMToken is used for authenticating communication with NodeManager

    It is issued by the ResourceManager when the ApplicationMaster negotiates resources with the ResourceManager, and validated on the NodeManager side.

    @see AllocateResponse#getNMTokens()]]>
    hostname of the node. @return hostname of the node]]> port for communicating with the node. @return port for communicating with the node]]> NodeId is the unique identifier for a node.

    It includes the hostname and port to uniquely identify the node. Thus, it is unique across restarts of any NodeManager.

    ]]>
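    A one-line illustrative sketch (not from the original Javadoc):

      NodeId nodeId = NodeId.newInstance("worker-17.example.com", 45454);
      String key = nodeId.toString();   // "worker-17.example.com:45454"
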
    NodeId of the node. @return NodeId of the node]]> NodeState of the node. @return NodeState of the node]]> http address of the node. @return http address of the node]]> rack name for the node. @return rack name for the node]]> used Resource on the node. @return used Resource on the node]]> total Resource on the node. @return total Resource on the node]]> diagnostic health report of the node. @return diagnostic health report of the node]]> last timestamp at which the health report was received. @return last timestamp at which the health report was received]]> It includes details such as:
    • {@link NodeId} of the node.
    • HTTP Tracking URL of the node.
    • Rack name for the node.
    • Used {@link Resource} on the node.
    • Total available {@link Resource} of the node.
    • Number of running containers on the node.
    @see ApplicationClientProtocol#getClusterNodes(org.apache.hadoop.yarn.api.protocolrecords.GetClusterNodesRequest)]]>
    State of a Node.

    ]]>
    ResourceManager. @see PreemptionContract @see StrictPreemptionContract]]> ApplicationMaster about resources requested back by the ResourceManager. @see AllocateRequest#setAskList(List)]]> ApplicationMaster that may be reclaimed by the ResourceManager. If the AM prefers a different set of containers, then it may checkpoint or kill containers matching the description in {@link #getResourceRequest}. @return Set of containers at risk if the contract is not met.]]> ResourceManager. The ApplicationMaster (AM) can satisfy this request according to its own priorities to prevent containers from being forcibly killed by the platform. @see PreemptionMessage]]> ResourceManager]]> The AM should decode both parts of the message. The {@link StrictPreemptionContract} specifies particular allocations that the RM requires back. The AM can checkpoint containers' state, adjust its execution plan to move the computation, or take no action and hope that conditions that caused the RM to ask for the container will change.

    In contrast, the {@link PreemptionContract} also includes a description of resources with a set of containers. If the AM releases containers matching that profile, then the containers enumerated in {@link PreemptionContract#getContainers()} may not be killed.

    Each preemption message reflects the RM's current understanding of the cluster state, so a request to return N containers may not reflect containers the AM is releasing, recently exited containers the RM has yet to learn about, or new containers allocated before the message was generated. Conversely, an RM may request a different profile of containers in subsequent requests.

    The policy enforced by the RM is part of the scheduler. Generally, only containers that have been requested consistently should be killed, but the details are not specified.]]> The ACL is one of:

    • {@link #SUBMIT_APPLICATIONS} - ACL to submit applications to the queue.
    • {@link #ADMINISTER_QUEUE} - ACL to administer the queue.
    @see QueueInfo @see ApplicationClientProtocol#getQueueUserAcls(org.apache.hadoop.yarn.api.protocolrecords.GetQueueUserAclsInfoRequest)]]>
    name of the queue. @return name of the queue]]> configured capacity of the queue. @return configured capacity of the queue]]> maximum capacity of the queue. @return maximum capacity of the queue]]> current capacity of the queue. @return current capacity of the queue]]> child queues of the queue. @return child queues of the queue]]> running applications of the queue. @return running applications of the queue]]> QueueState of the queue. @return QueueState of the queue]]> accessible node labels of the queue. @return accessible node labels of the queue]]> default node label expression of the queue, which takes effect only when the ApplicationSubmissionContext and ResourceRequest don't specify their NodeLabelExpression. @return default node label expression of the queue]]> queue stats for the queue. @return queue stats of the queue]]> preemption status of the queue. @return if property is not in proto, return null; otherwise, return preemption status of the queue]]> It includes information such as:
    • Queue name.
    • Capacity of the queue.
    • Maximum capacity of the queue.
    • Current capacity of the queue.
    • Child queues.
    • Running applications.
    • {@link QueueState} of the queue.
    • {@link QueueConfigurations} of the queue.
    @see QueueState @see QueueConfigurations @see ApplicationClientProtocol#getQueueInfo(org.apache.hadoop.yarn.api.protocolrecords.GetQueueInfoRequest)]]>
    A queue is in one of:
    • {@link #RUNNING} - normal state.
    • {@link #STOPPED} - not accepting new application submissions.
    • {@link #DRAINING} - not accepting new application submissions and waiting for applications to finish.
    @see QueueInfo @see ApplicationClientProtocol#getQueueInfo(org.apache.hadoop.yarn.api.protocolrecords.GetQueueInfoRequest)]]>
    queue name of the queue. @return queue name of the queue]]> QueueACL for the given user. @return list of QueueACL for the given user]]> QueueUserACLInfo provides information about the {@link QueueACL} for the given user.

    @see QueueACL @see ApplicationClientProtocol#getQueueUserAcls(org.apache.hadoop.yarn.api.protocolrecords.GetQueueUserAclsInfoRequest)]]>
    The ACL is one of:
    • {@link #ADMINISTER_RESERVATIONS} - ACL to create, list, update and delete reservations.
    • {@link #LIST_RESERVATIONS} - ACL to list reservations.
    • {@link #SUBMIT_RESERVATIONS} - ACL to create reservations.
    Users can always list, update and delete their own reservations.]]>
    It includes:
    • Duration of the reservation.
    • Acceptance time of the duration.
    • List of {@link ResourceAllocationRequest}, which includes the time interval, and capability of the allocation. {@code ResourceAllocationRequest} represents an allocation made for a reservation for the current state of the queue. This can be changed for reasons such as re-planning, but will always be subject to the constraints of the user contract as described by {@link ReservationDefinition}
    • {@link ReservationId} of the reservation.
    • {@link ReservationDefinition} used to make the reservation.
    @see ResourceAllocationRequest @see ReservationId @see ReservationDefinition]]>
    start time of the {@code ResourceManager} which is used to generate globally unique {@link ReservationId}. @return start time of the {@code ResourceManager}]]> {@link ReservationId} represents the globally unique identifier for a reservation.

    The globally unique nature of the identifier is achieved by using the cluster timestamp i.e. start-time of the {@code ResourceManager} along with a monotonically increasing counter for the reservation.

    ]]>
    It includes:
    • {@link Resource} required for each request.
    • Number of containers, of above specifications, which are required by the application.
    • Concurrency that indicates the gang size of the request.
    ]]>
    memory of the resource. Note - while memory has never had a unit specified, all YARN configurations have specified memory in MB. The assumption has been that the daemons and applications are always using the same units. With the introduction of the ResourceInformation class we have support for units - so this function will continue to return memory but in the units of MB @return memory(in MB) of the resource]]> memory of the resource. Note - while memory has never had a unit specified, all YARN configurations have specified memory in MB. The assumption has been that the daemons and applications are always using the same units. With the introduction of the ResourceInformation class we have support for units - so this function will continue to return memory but in the units of MB @return memory of the resource]]> memory of the resource. Note - while memory has never had a unit specified, all YARN configurations have specified memory in MB. The assumption has been that the daemons and applications are always using the same units. With the introduction of the ResourceInformation class we have support for units - so this function will continue to set memory but the assumption is that the value passed is in units of MB. @param memory memory(in MB) of the resource]]> memory of the resource. @param memory memory of the resource]]> number of virtual cpu cores of the resource. Virtual cores are a unit for expressing CPU parallelism. A node's capacity should be configured with virtual cores equal to its number of physical cores. A container should be requested with the number of cores it can saturate, i.e. the average number of threads it expects to have runnable at a time. @return num of virtual cpu cores of the resource]]> number of virtual cpu cores of the resource. Virtual cores are a unit for expressing CPU parallelism. A node's capacity should be configured with virtual cores equal to its number of physical cores. A container should be requested with the number of cores it can saturate, i.e. the average number of threads it expects to have runnable at a time. @param vCores number of virtual cpu cores of the resource]]> Resource models a set of computer resources in the cluster.

    Currently it models both memory and CPU.

    The unit for memory is megabytes. CPU is modeled with virtual cores (vcores), a unit for expressing parallelism. A node's capacity should be configured with virtual cores equal to its number of physical cores. A container should be requested with the number of cores it can saturate, i.e. the average number of threads it expects to have runnable at a time.

    Virtual cores take integer values and thus currently CPU-scheduling is very coarse. A complementary axis for CPU requests that represents processing power will likely be added in the future to enable finer-grained resource configuration.

    Typically, applications request Resource of suitable capability to run their component tasks.

    @see ResourceRequest @see ApplicationMasterProtocol#allocate(org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest)]]>
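    A one-line illustrative sketch (not from the original Javadoc): 4 GB of memory, expressed in MB, and 2 virtual cores:

      Resource capability = Resource.newInstance(4096, 2);
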
    It includes:
    • StartTime of the allocation.
    • EndTime of the allocation.
    • {@link Resource} reserved for the allocation.
    @see Resource]]>
    blacklist of resources for the application. @see ResourceRequest @see ApplicationMasterProtocol#allocate(org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest)]]> host/rack string represents an arbitrary host name. @param hostName host/rack on which the allocation is desired @return whether the given host/rack string represents an arbitrary host name]]> Priority of the request. @return Priority of the request]]> Priority of the request @param priority Priority of the request]]> host/rack) on which the allocation is desired. A special value of * signifies that any resource (host/rack) is acceptable. @return resource (e.g. host/rack) on which the allocation is desired]]> host/rack) on which the allocation is desired. A special value of * signifies that any resource name (e.g. host/rack) is acceptable. @param resourceName (e.g. host/rack) on which the allocation is desired]]> ResourceRequest. Defaults to true. @return whether locality relaxation is enabled with this ResourceRequest.]]> ExecutionTypeRequest of the requested container. @param execSpec ExecutionTypeRequest of the requested container]]> ResourceRequest. Defaults to true. @return whether locality relaxation is enabled with this ResourceRequest.]]> For a request at a network hierarchy level, set whether locality can be relaxed to that level and beyond.

    If the flag is off on a rack-level ResourceRequest, containers at that request's priority will not be assigned to nodes on that request's rack unless requests specifically for those nodes have also been submitted.

    If the flag is off on an {@link ResourceRequest#ANY}-level ResourceRequest, containers at that request's priority will only be assigned on racks for which specific requests have also been submitted.

    For example, to request a container strictly on a specific node, the corresponding rack-level and any-level requests should have locality relaxation set to false. Similarly, to request a container strictly on a specific rack, the corresponding any-level request should have locality relaxation set to false.

    @param relaxLocality whether locality relaxation is enabled with this ResourceRequest.]]> ID corresponding to this allocation request. This ID is an identifier for different {@code ResourceRequest}s from the same application. The allocated {@code Container}(s) received as part of the {@code AllocateResponse} response will have the ID corresponding to the original {@code ResourceRequest} for which the RM made the allocation.

    The scheduler may return multiple {@code AllocateResponse}s corresponding to the same ID as and when scheduler allocates {@code Container}(s). Applications can continue to completely ignore the returned ID in the response and use the allocation for any of their outstanding requests.

    If one wishes to replace an entire {@code ResourceRequest} corresponding to a specific ID, they can simply cancel the corresponding {@code ResourceRequest} and submit a new one afresh. @return the ID corresponding to this allocation request.]]> ID corresponding to this allocation request. This ID is an identifier for different {@code ResourceRequest}s from the same application. The allocated {@code Container}(s) received as part of the {@code AllocateResponse} response will have the ID corresponding to the original {@code ResourceRequest} for which the RM made the allocation.

    The scheduler may return multiple {@code AllocateResponse}s corresponding to the same ID as and when scheduler allocates {@code Container}(s). Applications can continue to completely ignore the returned ID in the response and use the allocation for any of their outstanding requests.

    If one wishes to replace an entire {@code ResourceRequest} corresponding to a specific ID, they can simply cancel the corresponding {@code ResourceRequest} and submit a new one afresh.

    If the ID is not set, scheduler will continue to work as previously and all allocated {@code Container}(s) will have the default ID, -1. @param allocationRequestID the ID corresponding to this allocation request.]]> Resource capability of the request. @param capability Resource capability of the request]]> Resource capability of the request. @return Resource capability of the request]]> It includes:

    • {@link Priority} of the request.
    • The name of the host or rack on which the allocation is desired. A special value of * signifies that any host/rack is acceptable to the application.
    • {@link Resource} required for each request.
    • Number of containers, of above specifications, which are required by the application.
    • A boolean relaxLocality flag, defaulting to {@code true}, which tells the {@code ResourceManager} if the application wants locality to be loose (i.e. allows fall-through to rack or any) or strict (i.e. specify hard constraint on resource allocation).
    @see Resource @see ApplicationMasterProtocol#allocate(org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest)]]>
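    A minimal, illustrative sketch (not from the original Javadoc) of a request for three 2 GB / 1 vcore containers anywhere in the cluster, leaving locality relaxation at its default of true:

      ResourceRequest request = ResourceRequest.newInstance(
          Priority.newInstance(1),
          ResourceRequest.ANY,              // any host or rack
          Resource.newInstance(2048, 1),
          3);                               // number of containers
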
    priority of the request. @see ResourceRequest#setPriority(Priority) @param priority priority of the request @return {@link ResourceRequestBuilder}]]> resourceName of the request. @see ResourceRequest#setResourceName(String) @param resourceName resourceName of the request @return {@link ResourceRequestBuilder}]]> capability of the request. @see ResourceRequest#setCapability(Resource) @param capability capability of the request @return {@link ResourceRequestBuilder}]]> numContainers of the request. @see ResourceRequest#setNumContainers(int) @param numContainers numContainers of the request @return {@link ResourceRequestBuilder}]]> relaxLocality of the request. @see ResourceRequest#setRelaxLocality(boolean) @param relaxLocality relaxLocality of the request @return {@link ResourceRequestBuilder}]]> nodeLabelExpression of the request. @see ResourceRequest#setNodeLabelExpression(String) @param nodeLabelExpression nodeLabelExpression of the request @return {@link ResourceRequestBuilder}]]> executionTypeRequest of the request. @see ResourceRequest#setExecutionTypeRequest( ExecutionTypeRequest) @param executionTypeRequest executionTypeRequest of the request @return {@link ResourceRequestBuilder}]]> executionTypeRequest of the request with 'ensure execution type' flag set to true. @see ResourceRequest#setExecutionTypeRequest( ExecutionTypeRequest) @param executionType executionType of the request. @return {@link ResourceRequestBuilder}]]> allocationRequestId of the request. @see ResourceRequest#setAllocationRequestId(long) @param allocationRequestId allocationRequestId of the request @return {@link ResourceRequestBuilder}]]> virtual memory. @return virtual memory in MB]]> virtual memory. @param vmem virtual memory in MB]]> physical memory. @return physical memory in MB]]> physical memory. @param pmem physical memory in MB]]> CPU utilization. @return CPU utilization normalized to 1 CPU]]> CPU utilization. @param cpu CPU utilization normalized to 1 CPU]]> ResourceUtilization models the utilization of a set of computer resources in the cluster.

    ]]>
    ApplicationMaster that may be reclaimed by the ResourceManager. @return the set of {@link ContainerId} to be preempted.]]> ApplicationMaster (AM) may attempt to checkpoint work or adjust its execution plan to accommodate it. In contrast to {@link PreemptionContract}, the AM has no flexibility in selecting which resources to return to the cluster. @see PreemptionMessage]]> Token is the security entity used by the framework to verify authenticity of any resource.

    ]]>
    ContainerId of the container. @return ContainerId of the container]]> ContainerUpdateType of the container. @return ContainerUpdateType of the container.]]> ContainerUpdateType of the container. @param updateType of the Container]]> ContainerId of the container. @return ContainerId of the container]]> ContainerId of the container. @param containerId ContainerId of the container]]> ExecutionType of the container. @return ExecutionType of the container]]> ExecutionType of the container. @param executionType ExecutionType of the container]]> Resource capability of the request. @param capability Resource capability of the request]]> Resource capability of the request. @return Resource capability of the request]]> It includes:
    • version for the container.
    • {@link ContainerId} for the container.
    • {@link Resource} capability of the container after the update request is completed.
    • {@link ExecutionType} of the container after the update request is completed.
    Update rules:
    • Currently only ONE aspect of the container can be updated per request (the user can update either Capability OR ExecutionType in one request, not both).
    • There must be only 1 update request per container in an allocate call.
    • If a new update request is sent for a container (in a subsequent allocate call) before the first one is satisfied by the Scheduler, it will overwrite the previous request.
    @see ApplicationMasterProtocol#allocate(org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest)]]>
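    An illustrative sketch (not from the original Javadoc), assuming the UpdateContainerRequest#newInstance factory and a Container#getVersion accessor; container is a placeholder for a previously allocated Container:

      // Ask the scheduler to grow a running container to 4 GB / 2 vcores.
      UpdateContainerRequest updateReq = UpdateContainerRequest.newInstance(
          container.getVersion(),
          container.getId(),
          ContainerUpdateType.INCREASE_RESOURCE,
          Resource.newInstance(4096, 2),    // target capability
          null);                            // execution type unchanged
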
    ContainerUpdateType. @return ContainerUpdateType]]> ContainerUpdateType. @param updateType ContainerUpdateType]]> Container. @return Container]]> Container. @param container Container]]> URL represents a serializable {@link java.net.URL}.

    ]]>
    RMAppAttempt.]]> ApplicationMaster.]]> NodeManagers in the cluster. @return number of NodeManagers in the cluster]]> DecommissionedNodeManagers in the cluster. @return number of DecommissionedNodeManagers in the cluster]]> ActiveNodeManagers in the cluster. @return number of ActiveNodeManagers in the cluster]]> LostNodeManagers in the cluster. @return number of LostNodeManagers in the cluster]]> UnhealthyNodeManagers in the cluster. @return number of UnhealthyNodeManagers in the cluster]]> RebootedNodeManagers in the cluster. @return number of RebootedNodeManagers in the cluster]]> YarnClusterMetrics represents cluster metrics.

    Currently only number of NodeManagers is provided.

    ]]>
    This class contains the information about a timeline domain, which is used by a user to host a number of timeline entities, isolating them from others'. The user can also define the reader and writer users/groups for the domain, which are used to control access to its entities.

    The reader and writer users/groups pattern that the user can supply is the same as what AccessControlList takes.

    ]]>
    The class that contains the meta information of some conceptual entity and its related events. The entity can be an application, an application attempt, a container or any other user-defined object.

    Primary filters will be used to index the entities in the TimelineStore, so users should carefully choose the information they want to store as primary filters. The rest can be stored as other information.

    ]]>
    ApplicationId of the TimelineEntityGroupId. @return ApplicationId of the TimelineEntityGroupId]]> timelineEntityGroupId. @return timelineEntityGroupId]]> TimelineEntityGroupId is an abstract way for timeline service users to represent a group of related timeline data. For example, all entities that represent one data flow DAG execution can be grouped into one timeline entity group.

    ]]>
    The constructor is used to construct a proxy {@link TimelineEntity} or its subclass object from the real entity object that carries information.

    It is usually used in the case where we want to recover class polymorphism after deserializing the entity from its JSON form.

    @param entity the real entity that carries information]]>
    Note: Entities will be stored in the order of idPrefix specified. If users decide to set idPrefix for an entity, they MUST provide the same prefix for every update of this entity.

    Example:
     TimelineEntity entity = new TimelineEntity();
     entity.setIdPrefix(value);
     
    Users can use {@link TimelineServiceHelper#invertLong(long)} to invert the prefix if necessary. @param entityIdPrefix prefix for an entity.]]>
    name property as an InetSocketAddress. On an HA cluster, this fetches the address corresponding to the RM identified by {@link #RM_HA_ID}. @param name property name. @param defaultAddress the default value @param defaultPort the default port @return InetSocketAddress]]> OPPORTUNISTIC containers on the NM.]]>
  • default
  • docker
  • ]]>
    Default platform-specific CLASSPATH for YARN applications. A comma-separated list of CLASSPATH entries constructed based on the client OS environment expansion syntax.

    Note: Use {@link #DEFAULT_YARN_CROSS_PLATFORM_APPLICATION_CLASSPATH} for cross-platform practice i.e. submit an application from a Windows client to a Linux/Unix server or vice versa.

    ]]>
    The information is passed along to applications via {@link StartContainersResponse#getAllServicesMetaData()} that is returned by {@link ContainerManagementProtocol#startContainers(StartContainersRequest)}

    @return meta-data for this service that should be made available to applications.]]>
    The method used by the NodeManager log aggregation service to initialize the policy object with parameters specified by the application or the cluster-wide setting.

    @param parameters parameters with scheme defined by the policy class.]]>
    The method used by the NodeManager log aggregation service to ask the policy object if a given container's logs should be aggregated.

    @param logContext ContainerLogContext @return Whether or not the container's logs should be aggregated.]]>
    The method used by administrators to ask the SCM to run the cleaner task right away.

    @param request request SharedCacheManager to run a cleaner task @return SharedCacheManager returns an empty response on success and throws an exception on rejecting the request @throws YarnException @throws IOException]]>
    The protocol between administrators and the SharedCacheManager

    ]]>