Fox World Travel Concur Support Desk
Fox World Travel is available to support your Concur questions. Contact support via phone at 608-710-4172 or via chat in the Concur application.
Forgot Password?
Submit a request to Concur for a password reset. Your username is your current institutional email address.
New User Registration
New users can register for Concur online. Registration is limited to employees, and your university email address must be used as your username.
Additional Support
Contact your institution’s travel manager for additional support questions.
Personalized SAP Concur Open Updates
Personalized up-to-the-minute service availability and performance information
-
OPI-6002076 : US2 | EU2 | APJ1 : Expense | Travel | Invoice | Request | Imaging | Analysis/Intelligence : Intermediate Root Cause Analysis
6 May 2025 | 6:51 am
Impact as Reported: In the EU2, US2, and APJ1 Data Centers, logging in through the SAP Concur website and SAP Concur mobile app was unavailable. Affected users saw "Sorry, something went wrong" when trying to access SAP Concur services. Root cause: An outage in the service that scans and validates the origin of incoming traffic, which is required to prevent access from embargoed countries, caused all inbound traffic to be blocked. The reason for the failure and the lack of an effective failover mechanism are still under investigation. Corrective Actions: N/A
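The failure mode described above is a classic fail-closed design: when the origin-scanning service went down, every request was rejected. The hypothetical sketch below (not SAP Concur's actual code; all names are placeholders) shows how a fail-open policy flag changes the outcome of such an outage.

```python
# Hypothetical gateway embargo check. `lookup_country` stands in for the
# origin-scanning service and raises when that service is unavailable.

EMBARGOED = {"XX", "YY"}  # placeholder country codes

def lookup_country(ip: str) -> str:
    # Stand-in for the real service; here it simulates a total outage.
    raise TimeoutError("origin-scanning service unavailable")

def allow_request(ip: str, fail_open: bool = False) -> bool:
    """Return True if traffic from `ip` may pass the embargo check."""
    try:
        return lookup_country(ip) not in EMBARGOED
    except TimeoutError:
        # fail_open=True keeps the site reachable during an outage, at the
        # cost of briefly skipping the embargo check; fail_open=False
        # reproduces the incident: all inbound traffic is blocked.
        return fail_open
```

Whether fail-open is acceptable here is a compliance decision, not just an engineering one, which may be why the report flags the failover question as still under investigation.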
-
OPI-5997407 : US2 : Expense | Travel | Invoice | Request | Imaging | Analysis/Intelligence : Root Cause Analysis
6 May 2025 | 3:03 am
Impact as Reported: In the US2 Data Center, logins through the SAP Concur website and SAP Concur mobile app were below the expected level. Affected users may have experienced intermittent errors when attempting to log in. Root cause: As part of a standard maintenance operation, users were being moved from a container cluster providing gateway services to a new cluster. The service team performing the operation did not notice that the new cluster was not scaling up capacity quickly enough to support the incoming load, resulting in latency and timeouts. The incident response team moved all users back to the original cluster, restoring service to normal levels. The migration was then completed after manually scaling the new cluster's capacity to match the old one. Corrective Actions: Add a step to the deployment process that manually scales the secondary cluster to match production capacity instead of relying on auto-scaling.
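The corrective action amounts to a pre-cutover capacity gate: do not shift traffic until the new cluster has at least as many replicas as production. A minimal sketch of that check, assuming replica counts are a reasonable proxy for capacity (all names are hypothetical):

```python
# Hypothetical pre-cutover check: scale the secondary cluster to match
# production before migrating users, rather than trusting auto-scaling
# to catch up under live load.

def ready_for_cutover(prod_replicas: int, new_replicas: int) -> bool:
    """Only migrate once the new cluster matches production capacity."""
    return new_replicas >= prod_replicas

def plan_cutover(prod_replicas: int, new_replicas: int) -> str:
    if ready_for_cutover(prod_replicas, new_replicas):
        return "proceed: shift traffic to new cluster"
    needed = prod_replicas - new_replicas
    return f"hold: scale new cluster up by {needed} replicas first"
```

For example, `plan_cutover(10, 4)` would hold the migration until six more replicas are added, which is exactly the manual step the incident response team ended up performing.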