GitLab account access error: "422 The change you requested was rejected." - cookies

This question was asked before by coderss, but restarting the computer seems to be ineffective.
422
The change you requested was rejected.
Make sure you have access to the thing you tried to change.
Please contact your GitLab administrator if you think this is a mistake.
I get the above error in Firefox under Linux, but I can access the site in Chromium.
That looks like a typical cookie problem.
I tried clearing all GitLab-related cookies, then restarted the computer without any new sign-in attempt (yes, I really did restart it :)).
But I still get the same error in the same browser.
How can I handle this problem?
This error also occurs on the forgot-password page and in a private tab of Firefox.
Is there another GitLab-related cookie?

The issue may be fixed not only by clearing cookies as described, but also by correcting the system time.
I faced exactly the same problem: unable to connect with Firefox even after a cookie reset, but able to connect with Chrome. (Which sounds strange, because my system clock was wrong in Chrome as well.)
The solution came with this very short explanation:
"it's was because my local time zone wasn't set up properly (and was messing with cookies)"
Source: https://www.reddit.com/r/gitlab/comments/cv7pov/422_error_on_wwwgitlabcomuserssignin_and/ey7l7lz?utm_source=share&utm_medium=web2x&context=3
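If you want to rule the clock in or out before digging further, standard systemd tooling will show it. A minimal sketch, assuming a systemd-based distro (the zone name is just an example):
timedatectl                                    # shows local time, universal time, and the configured time zone
timedatectl list-timezones | grep -i istanbul  # look up your zone name
sudo timedatectl set-timezone Europe/Istanbul  # example zone; substitute your own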

This was followed by issue 35447 and issue 40898.
The latter included:
Ok, I suspect the issue here for many people is that the GitLab session cookie is set to Secure here: https://gitlab.com/gitlab-org/gitlab-ce/blob/9c491bc628f5a72424b82bb01e2457150bf2e71c/config/initializers/session_store.rb#L25
Setting the right SSL headers fixes the problem.
If, for some reason, the connection doesn't appear to be an HTTPS connection, Rails won't send a cookie, and the client won't be able to login. You may be able to confirm this by checking the response headers in the GET /users/sign_in endpoint: if you see a _gitlab_session cookie being sent the first time you load the page, then things are working properly.
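One way to check this yourself is to dump the response headers. A curl sketch against gitlab.com (adjust the host for an on-premise instance):
curl -s -o /dev/null -D - https://gitlab.com/users/sign_in | grep -i '^set-cookie'
# A _gitlab_session cookie should appear on the first page load if things are working.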
And:
JuKu:
Solution for HAProxy:
Add this line to your frontend: reqadd X-Forwarded-Proto:\ http
After this change, it worked for me.
See also: https://www.digitalocean.com/community/tutorials/how-to-implement-ssl-termination-with-haproxy-on-ubuntu-14-04
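For context, here is roughly where such a line lives in an SSL-terminating haproxy.cfg. A minimal sketch with assumed names; note that on HAProxy 2.x the legacy reqadd directive has been removed in favor of http-request:
frontend gitlab_https
    bind *:443 ssl crt /etc/haproxy/certs/gitlab.pem   # assumed certificate path
    # tell Rails the original request was HTTPS so it will set the Secure session cookie
    http-request set-header X-Forwarded-Proto https
    default_backend gitlab                             # assumed backend name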
That would avoid the dreaded 422 error.
But it depends on the type of GitLab used (gitlab.com or an on-premise GitLab) and the type of web server used.
For example, issue 53085 refers to issue 54493:
The group had internal availability, while one of its projects was public (not the one I was having so much trouble with, which was private).
Making the group public solved the problem.
The OP maxemilian reports in the comments that it is now working with Firefox on Manjaro:
I checked my update history, but only zoom matches the time Firefox started working again.
I'm pretty sure this was related to GitLab's login code. Suspicious dates (Jan 6 - Jan 21 and Feb 3 - Feb 6).
I think this update was done by GitLab between Feb 3 and Feb 6.

In my case, the server time was behind; I had to change the time, then restart the server and reconfigure GitLab.
Change the server time:
sudo timedatectl set-time "06:24:00"
sudo timedatectl set-time "2020-04-23"
sudo hwclock --systohc
Reconfigure GitLab:
sudo gitlab-ctl reconfigure
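To keep the clock from drifting again, it may also help to enable NTP synchronization, assuming systemd-timesyncd (or another NTP client) is available on the server:
sudo timedatectl set-ntp true   # enable automatic time synchronization
timedatectl                     # verify "System clock synchronized: yes"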

Empty Cache and Hard Reload in Chrome will do the trick.

In my case, I was trying to fetch changes using a Git command and also got this error. It turned out that I was using the wrong URL: the .git suffix was missing. Curiously, it worked the first time.
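If you hit the same thing, checking and correcting the remote URL looks like this (the URL here is a placeholder):
git remote -v                                                   # inspect the current remote URL
git remote set-url origin https://gitlab.com/group/project.git  # note the .git suffix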

For me, it was the VPN. If you are connected to a VPN set to a different time zone, turn it off, clear the cookies, and you should be able to connect.

Related

AWS Workspaces client on Ubuntu gives display error

I'm trying to connect to my Amazon WorkSpace using the WorkSpaces client on Ubuntu, and I keep getting a Display Error. I can connect with no problem from my Windows machine at work, but I keep getting the error at home on my Linux machine.
I had the same trouble but solved the problem. Please try the following process:
Uninstall the AWS WorkSpaces client.
Restart the PC and install the client again.
Wait a couple of hours. While you wait, keep pressing the Refresh button or restarting the client until the window shows the login screen; according to my IT department, it may take some time for the host side to clear the old registration information and refresh it.
Anyway, I've sent the log files to the AWS help desk and am waiting for their reply.
Good luck!
Rather than assuming it is not working for such-and-such a reason, we should find the exact root cause of the error. To get the RCA, check the log files in this directory:
C:\Users\yourname\AppData\Local\Amazon Web Services\Amazon WorkSpaces\logs
For me, they showed that access to the AWS URL was forbidden.

How do I get past ColdFusion server-specific error code 2?

I installed ColdFusion 2018 recently, and with the installation less than a month old (and my understanding of the technology even less), my ColdFusion service has stopped working. I have tried a number of things and referred to a number of articles; among the many posts about the service not being accessible, some authors were able to get it resolved, but other, more obscure causes of this error remain untouched and unknown.
Whenever I try to restart the service, I get the error shown below:
Windows could not start the ColdFusion 8 Application Server on Local Computer. For more information, review the System Event Log. If this is a non-Microsoft service, contact the service vendor, and refer to service-specific error code 2.
Without much understanding, I started to Google it. Looking into every one of these posts, I tried to:
Configure the JRE and relaunch the service, checking the JAVA_HOME variable and jvm.config (see the sketch after this list)
Run the batch files in every possible combination to see if anything clicks
Check whether the present Java version works and is compatible with the installed ColdFusion version
Fiddle with the "SessionStorage" var in the neo-runtime.xml file, as some suggested
and a few other tricks, coupled with numerous service restart attempts and a few machine reboots as well.
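For reference, the JVM that the ColdFusion service launches with is configured in jvm.config. A minimal sketch, assuming a default ColdFusion 2018 install path; a java.home pointing at a missing or incompatible JVM is one known cause of the service failing to start:
# File: C:\ColdFusion2018\cfusion\bin\jvm.config   (assumed default location)
# java.home must point at a JVM compatible with this ColdFusion version;
# note the forward slashes.
java.home=C:/ColdFusion2018/jre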
A service that renders ColdFusion pages should not just shut down abruptly. To add to the agony, the CF Admin also depends on the service and hence does not work either.
Any pointers to any potential solutions?

GCS appears to be blocking my IP

I have been testing out an Ubuntu instance on GCS for the last couple of weeks as a possible home for one of our web servers. Last week, everything suddenly stopped working: I was not able to SSH into a shell, and I couldn't even visit the site anymore through my browser. I logged into the dashboard and nothing seemed wrong. I had several colleagues try to go to the site, and it loaded for them without any issues. I could not find any settings in the dashboard that would suggest some kind of block like this, so I assumed I must have triggered some kind of anti-spam system. I decided to give it a few days before messing with it any further. After 6 days of not touching it at all, I still cannot visit the site or log in via SSH.
Then, to verify that they are blocking my IP address and that it wasn't just something wrong with my machine, I switched my IP, and everything started behaving as expected once again: I can get to the site in my browser and can once again SSH into the VM. After switching back to my previous static IP, everything went back to not letting me view the webpage or SSH into the server.
My problem is that this isn't a permanent solution for me. I have many servers that only allow login from my previous IP address, so I'd rather fix the issue with this VM than change all those systems to allow a new IP address. Any help on finding the solution would be greatly appreciated.
Please let me know if I can provide any additional info to help find the problem.
Follow-up info:
The way our network is set up, the IP we get from DHCP is the real-world IP our device is seen with (I think we own a block or something).
This is the first time I've done anything with a GCS VM.

Strange apache lag in requests

I have an Apache 2 + Django (mod_wsgi) setup that provides a RESTful API. I have a set of automated tests for it that executes ~1000 API requests (plain HTTP GET/POST/PUT/DELETE) in sequential order.
The problem is that for every 80 requests or so, I get a strange lag/timeout of exactly 5s or 10s. See the timestamp examples here:
Request 1: 2013-08-30T03:49:20.915
Response 1: 2013-08-30T03:49:30.940
Request 2: 2013-08-30T03:50:32.559
Response 2: 2013-08-30T03:50:37.597
I can't figure out why this happens. I have an Apache config with KeepAlive Off (the recommended setting for Django), but an otherwise standard install on Ubuntu 12.04 LTS.
I'm running the tests from the same server the web server is on. I first thought this was some kind of DNS cache thing, so I added the hostname I'm requesting to /etc/hosts, but the problem persists.
The system is idle and has plenty of CPU and memory when these lags/timeouts happen.
The lag is not specific to a certain request (URL); it seems fairly random.
Considering that it's always exactly 5s or 10s to the millisecond, it feels like some specific setting somewhere is causing this.
In case it provides some insight, watch my talk from PyCon US.
http://lanyrd.com/2013/pycon/scdyzk/
The talk deals with things like process churn and startup costs. One thing you shouldn't do is set maximum requests if you don't really need it; see the sketch below.
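For context, "maximum requests" here refers to mod_wsgi's daemon-mode option, which forces periodic process recycling. A minimal sketch of a daemon-mode config (names and paths assumed):
# Daemon mode without forced recycling; add maximum-requests=N only if you
# genuinely need periodic restarts, since requests stall while a process respawns.
WSGIDaemonProcess myapp processes=2 threads=15
WSGIProcessGroup myapp
WSGIScriptAlias / /var/www/myapp/wsgi.py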
Also consider trying New Relic to help diagnose where the issue is. That will save a lot of guessing about whether it is a web-application or backend-infrastructure issue.
To see how such monitoring can help, watch another one of my PyCon talks.
http://lanyrd.com/2012/pycon/spcdg/
This turned out to be a DNS issue; adding the domain name I used locally to /etc/hosts actually solved the problem. I just hadn't rebooted the server for the changes to take effect. I thought restarting networking would take care of that, but apparently not.
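For reference, the kind of /etc/hosts entry meant here (the hostname is a placeholder):
127.0.0.1   api.example.com   # point the hostname under test at the local web server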

VMware Server 2.0 - The VMware Infrastructure Web Service not responding

After installing VMware Server I get the following error when I try to access the VMware web-based server manager:
The VMware Infrastructure Web Service at "http://localhost:8222/sdk" is not responding
Go into the services manager and check that the 'VMware Host Agent' service is running. If not, start it and try browsing to the site again.
VMware hostd was not working for me either.
However, when I tried to start the service, it stopped automatically. Typically when this happens, it is because there is an error in your config.xml:
C:\ProgramData\VMware\VMware Server\hostd\config.xml
In my case, checking the logs at:
C:\ProgramData\VMware\VMware Server
showed it erroring out after "Trying hostsvc".
Searching config.xml for hostsvc showed references to several things; the first was the datastore. So I checked my datastores.xml file:
C:\ProgramData\VMware\VMware Server\hostd\datastores.xml
I found it full of all sorts of random characters instead of a properly formed XML document.
Renaming datastores.xml to datastores.xml.bad allowed me to start the service, at which point I had to add back my datastores through the GUI.
Hopefully this will help someone else out. I did not find any other references to this issue on Google.
Try accessing via "http://localhost:8222" without the /sdk. You can also try the secure site via "https://localhost:8333".