I'm trying to set up running Google Tests on a C++ repository using GitHub Actions, on the windows-latest runner.
The build completes, but when it comes to running the tests, the job gets stuck and never executes the executable generated from the Visual Studio project via msbuild. The tests should finish almost instantly, since there are not a lot of them. I have also waited as long as 30 minutes for the action to complete, but no luck.
What I have tried is:
Running the executable straight from the actions.yml file like:
Build/Tests.exe
Build/Tests
./Build/Tests
./Build/Tests.exe
cd Build && Tests.exe
etc.
Some of these commands run and execute locally, but hang on the GitHub Actions runner.
I have since switched to a Python script that runs these commands, hoping to get more insight into the process. It seems that when calling subprocess.Popen, the action is still stuck, but now inside wait().
Workflow call stack after canceling the workflow:
Run python Tools/test.py
python Tools/test.py
shell: C:\Program Files\PowerShell\7\pwsh.EXE -command ". '{0}'"
env:
CMAKE_VERSION: 3.21.1
NINJA_VERSION: 1.10.2
pythonLocation: C:\hostedtoolcache\windows\Python\3.10.2\x64
^CTraceback (most recent call last):
File "D:\a\Lib\Lib\Tools\test.py", line 48, in <module>
sys.exit(main())
File "D:\a\Lib\Lib\Tools\test.py", line 44, in main
return run(cmd_arguments)
File "D:\a\Lib\Lib\Tools\test.py", line 39, in run
return process.communicate()[0]
File "C:\hostedtoolcache\windows\Python\3.10.2\x64\lib\subprocess.py", line 1141, in communicate
self.wait()
File "C:\hostedtoolcache\windows\Python\3.10.2\x64\lib\subprocess.py", line 1204, in wait
return self._wait(timeout=timeout)
File "C:\hostedtoolcache\windows\Python\3.10.2\x64\lib\subprocess.py", line 1485, in _wait
result = _winapi.WaitForSingleObject(self._handle,
KeyboardInterrupt
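The traceback shows the parent blocked in _winapi.WaitForSingleObject, i.e. the child process never exits; on a headless Windows runner a common culprit is the child waiting on console input or an error dialog. To narrow this down, I've rewritten the script roughly like this (a sketch; Build\Tests.exe and the 120-second timeout are placeholders for my setup), detaching stdin and adding a timeout so a hang fails fast instead of blocking the job:

import subprocess
import sys

def run(cmd):
    try:
        # Don't capture stdout/stderr: inheriting the console avoids any
        # pipe-buffer deadlock. DEVNULL makes a read from stdin fail
        # immediately instead of blocking forever.
        result = subprocess.run(cmd, stdin=subprocess.DEVNULL, timeout=120)
        return result.returncode
    except subprocess.TimeoutExpired:
        print("Test executable timed out - likely blocked on input", file=sys.stderr)
        return 1

if __name__ == "__main__":
    sys.exit(run([r"Build\Tests.exe"]))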
I am hoping to use fbprophet in my Cloud Function in a Python 3.7 environment, but it fails to build and gives me the following error.
Build failed: `pip_download_wheels` had stderr output:
ERROR: Command errored out with exit status 1:
command: /opt/python3.7/bin/python3.7 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-wheel-srnqu7b5/fbprophet/setup.py'"'"'; __file__='"'"'/tmp/pip-wheel-srnqu7b5/fbprophet/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-3_5khs54
cwd: /tmp/pip-wheel-srnqu7b5/fbprophet/
Complete output (40 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib
creating build/lib/fbprophet
creating build/lib/fbprophet/stan_model
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-wheel-srnqu7b5/fbprophet/setup.py", line 148, in <module>
"""
File "/opt/python3.7/lib/python3.7/site-packages/setuptools/__init__.py", line 140, in setup
return distutils.core.setup(**attrs)
File "/opt/python3.7/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/opt/python3.7/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/opt/python3.7/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/opt/python3.7/lib/python3.7/site-packages/wheel/bdist_wheel.py", line 202, in run
self.run_command('build')
File "/opt/python3.7/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/opt/python3.7/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/opt/python3.7/lib/python3.7/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/opt/python3.7/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/opt/python3.7/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/tmp/pip-wheel-srnqu7b5/fbprophet/setup.py", line 48, in run
build_models(target_dir)
File "/tmp/pip-wheel-srnqu7b5/fbprophet/setup.py", line 36, in build_models
from fbprophet.models import StanBackendEnum
File "/tmp/pip-wheel-srnqu7b5/fbprophet/fbprophet/__init__.py", line 8, in <module>
from fbprophet.forecaster import Prophet
File "/tmp/pip-wheel-srnqu7b5/fbprophet/fbprophet/forecaster.py", line 14, in <module>
import numpy as np
ModuleNotFoundError: No module named 'numpy'
----------------------------------------
ERROR: Failed building wheel for fbprophet
ERROR: Failed to build one or more wheels
error: `pip_download_wheels` returned code: 1; Error ID: 618AA8E7
This is what my requirements.txt file looks like:
cython
pystan
numpy
pandas==1.0.3
google-cloud-storage==1.29.0
fbprophet
geopy==1.22.0
google-cloud-bigquery==1.25.0
Everything works perfectly fine locally in a Python 3.7 virtual environment in a Jupyter notebook. I would appreciate any help, because I've spent almost an entire day trying to fix this, to no avail.
I'm having similar issues to this. My goal is to deploy a function that, when passed some input, will feed this into a Prophet model in order to make a prediction before passing the prediction to another part of my system.
As far as I can tell, there are a few things that make this complicated.
First, there is the issue of build dependencies that #mgoya mentioned in the comment above. This manifests both when installing Prophet and when installing Pystan (a dependency of Prophet). In my cloudbuild.yaml I'm attempting to circumvent this by installing the dependencies in sequence, like this:
steps:
- name: 'docker.io/library/python:3.9'
entrypoint: /bin/sh
# Run pip install and pytest in the same build step
# (pip packages won't be preserved in future steps!)
args: [-c, 'pip uninstall pystan; pip install convertdate==2.1.2 lunarcalendar==0.0.9 holidays==0.10.3 pgen tqdm cython pandas numpy setuptools; pip install pystan==2.19.1.1; pip install -r requirements.txt']
timeout: 1200s
This first uninstalls Pystan (if one exists locally), then installs the build dependencies for Pystan (itself a build dependency for Prophet), then installs Pystan, and finally installs requirements.txt (which includes Prophet).
Note the semicolons. I ran into issues when using double ampersands instead (i.e. pip install ... && pip install ..., etc.). I believe this may be due to the way the installed packages make their way to the local file system and become available to the following commands. Running them with && seems to try to run them all at once, which doesn't give the earlier parts of the installation time to propagate and be discoverable by later steps. Using semicolons seems to help with this, but it's anecdotal evidence at best.
You could also try using python -m pip install ... rather than pip install, though I'm not familiar enough with Python to tell you the difference off the top of my head.
Secondly, there are the memory requirements for installing Pystan. I've read somewhere that Pystan requires 4GB of memory to install. The default machine_type for cloudbuild does not have that much. You can try increasing the size of the machine by changing its type.
Finally, there's the resources allocated for function execution. Again, I read somewhere that Prophet / Pystan require 2GB of memory just to run one of the models. So, if your cloud function doesn't have this, it may run into memory issues when trying to execute. In my experience so far, memory issues are not that transparent within Google Cloud.
--
My current thinking (and perhaps what I'd recommend to anyone else reading this post) is to consider whether Cloud Functions is the right tool for this, given the weight of the dependencies. Pystan and Prophet are rather special-case dependencies, given their build and runtime resource requirements.
What I've opted to do is build (locally) a container with these dependencies baked in and push it to Google's Container Registry. My plan from here is to use that container as the base image for a Cloud Run application, which is significantly easier to deploy. This has obvious drawbacks, but if my base image changes infrequently (which it will), I think this approach will be fine. Unfortunately, this model ("bring your own container") is not supported by Cloud Functions; it's designed so that all you bring is code.
I have one GitHub project; I cloned it locally using the PyCharm IDE and installed all the packages needed to run the project successfully. When I try to build the solution, I get an exception saying the host alias is not in the environment.
Below are the exception details.
`Unhandled exception in thread started by <function fn at 0x7fa22ef08e60>
Traceback (most recent call last):
File "./runner.py", line 20, in fn
subprocess.call(command.split(" "), cwd = directory)
File "/usr/lib/python2.7/subprocess.py", line 523, in call
return Popen(*popenargs, **kwargs).wait()
File "/usr/lib/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1343, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
BUILD_NUMBER: 1390
FATAL ERROR: FEED_HOST_ALIAS not in environment
BUILD_NUMBER: 1390
FATAL ERROR: FEED_HOST_ALIAS not in environment
BUILD_NUMBER: 1390
FATAL ERROR: FEED_HOST_ALIAS not in environment`
OS version: Ubuntu 16.04 LTS
Python version: 2.7.10
Let's sort things out.
- This is a problem with running the application; you don't "build" Python applications.
- This seems to be a multithreaded application, so there may be output from different threads.
- In line 20 of your file ./runner.py you call subprocess.call().
- That call internally calls some other things (Popen() and _execute_child()), but the final action is to raise an exception telling you No such file or directory.
- Since the subprocess.call() call ends at that point, it is likely that the other output is from another thread. You could search your solution for the string 'FEED_HOST_ALIAS', but my guess is that this message is not directly related to the problem.
The problem is, as subprocess.call() reports, that the file you are asking it to run does not exist. So set a breakpoint in runner.py at line 20 and have a look at what is passed as parameters; then you should find out what happens. PyCharm lets you step into calls, so you might even want to check what happens inside subprocess.call(), or place a breakpoint somewhere within subprocess.py.
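To check this quickly (a sketch; the command string and directory below are placeholders for whatever runner.py actually builds), you can verify up front that the first token of the command resolves to an executable before calling it:

import subprocess
from distutils.spawn import find_executable  # works on Python 2.7; shutil.which is 3.3+

command = "some_tool --flag"  # placeholder for the command built in runner.py
directory = "."               # placeholder for the cwd passed to call()

args = command.split(" ")
print("args: %r" % (args,))

# OSError: [Errno 2] from Popen means args[0] could not be found/executed.
resolved = find_executable(args[0])
if resolved is None:
    print("%r is not on PATH - this would raise [Errno 2]" % args[0])
else:
    print("resolved executable: %s" % resolved)
    subprocess.call(args, cwd=directory)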
I have a Celery task that calls a Django management command using call_command, and that management command uses the Python logging framework to create a TimedRotatingFileHandler.
On a Windows test station, I am getting the following stack trace, which seems to show that the Celery tasks are still holding the log file handle open after they have completed:
Traceback (most recent call last):
File "C:\Python27\lib\logging\handlers.py", line 77, in emit
self.doRollover()
File "C:\Python27\lib\logging\handlers.py", line 350, in doRollover
os.rename(self.baseFilename, dfn)
WindowsError: [Error 32] The process cannot access the file because it is being used by another process
Logged from file myCommand.py, line XX
Is this a known issue, and if so, is there a method for getting around it? (I have tried Googling but could not find anything relevant.)
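One workaround I'm experimenting with (a minimal sketch; the logger name and path are placeholders, not my actual code) is to close and remove the handler when the management command finishes, so the worker process releases the file handle before the next rollover:

import logging
from logging.handlers import TimedRotatingFileHandler

LOG_PATH = "mycommand.log"  # placeholder

def handle(*args, **options):
    logger = logging.getLogger("mycommand")
    handler = TimedRotatingFileHandler(LOG_PATH, when="midnight")
    logger.addHandler(handler)
    try:
        logger.info("command started")
        # ... actual work ...
    finally:
        # On Windows, doRollover()'s os.rename fails (Error 32) while any
        # process still holds the file open, so release it explicitly.
        logger.removeHandler(handler)
        handler.close()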
I'm trying to perform testing with coverage on a Django application within PyCharm itself. I'm using django-nose / nose as the test runner, and I have these settings in test_settings.py:
# nose settings
TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'
NOSE_ARGS = [
'--with-coverage',
'--cover-package=myapp',
]
This all works very nicely if I go to the command line (or the built-in PyCharm terminal), activate my virtualenv, and run django-admin.py test --settings=myapp.tests.test_settings; I get the output I expect.
However, when I try to use my test run configuration inside PyCharm, I no longer get the coverage output; all I get is:
Testing started at 16:37 ...
There is no such manage file manage
nosetests --with-coverage --cover-package=dynamicbanners --verbosity=1
Creating test database for alias 'default'...
..
Process finished with exit code 0
(I don't have a manage.py, as it is a standalone app; hence the manage file message, I assume.)
The tests still "pass", but there is no output and no .coverage file is created. I am running on Windows temporarily; is it possible that Windows is denying write access to the .coverage file and thus nothing is being displayed? Or is PyCharm eating the output but failing to display it?
I would use the built-in PyCharm "Run with coverage" option, but it seems to have a problem with
NOSE_ARGS = [
'--with-coverage',
'--cover-package=myapp',
]
because when I run it with those options it spits out an error, which I believe is due to two instances of coverage being initialised:
Testing started at 16:41 ...
There is no such manage file manage
nosetests --with-coverage --cover-package=myapp --verbosity=1
Creating test database for alias 'default'...
Traceback (most recent call last):
File "C:\Program Files (x86)\JetBrains\PyCharm 3.0.1\helpers\run_coverage.py", line 34, in <module>
main()
File "C:\Users\ptinkler\venvs\myapp_venv\lib\site-packages\coverage\cmdline.py", line 720, in main
status = CoverageScript().command_line(argv)
File "C:\Users\ptinkler\venvs\myapp_venv\lib\site-packages\coverage\cmdline.py", line 438, in command_line
self.do_execute(options, args)
File "C:\Users\ptinkler\venvs\myapp_venv\lib\site-packages\coverage\cmdline.py", line 580, in do_execute
self.coverage.stop()
File "C:\Users\ptinkler\venvs\myapp_venv\lib\site-packages\coverage\control.py", line 410, in stop
self.collector.stop()
File "C:\Users\ptinkler\venvs\myapp_venv\lib\site-packages\coverage\collector.py", line 294, in stop
assert self._collectors[-1] is self
AssertionError
Process finished with exit code 1
This is despite me having unchecked "use bundled coverage.py" in Settings -> Coverage.
I would appreciate any assistance; I'd rather not have to create a local settings file to get rid of those NOSE_ARGS just to get coverage displaying within PyCharm. If I can only end up with standard coverage output in the PyCharm test-runner console, that's not the end of the world. Otherwise I guess I'll have to stick to the command line.
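One thing I'm considering, rather than a separate settings file, is making NOSE_ARGS conditional in test_settings.py (a sketch; PYCHARM_COVERAGE is a hypothetical variable I'd set myself in the PyCharm run configuration), so nose's coverage plugin is skipped whenever PyCharm's own coverage.py instance is active:

import os

TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'

# PYCHARM_COVERAGE is set manually in the PyCharm run configuration;
# when present, skip nose's coverage plugin so only one coverage
# instance is initialised.
if os.environ.get('PYCHARM_COVERAGE'):
    NOSE_ARGS = []
else:
    NOSE_ARGS = [
        '--with-coverage',
        '--cover-package=myapp',
    ]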
Ti.SDK 1.6.2
iOS 4.3
I recently got a new system and built it off of a Time Machine backup of my old system.
When I try to compile and run any app in the simulator, I get the following error in the console:
[INFO] One moment, building ...
Traceback (most recent call last):
File "/Library/Application Support/Titanium/mobilesdk/osx/1.6.2/iphone/builder.py", line 1342, in <module>
main(sys.argv)
File "/Library/Application Support/Titanium/mobilesdk/osx/1.6.2/iphone/builder.py", line 505, in main
link_version = check_iphone_sdk(iphone_version)
File "/Library/Application Support/Titanium/mobilesdk/osx/1.6.2/iphone/builder.py", line 48, in check_iphone_sdk
output = run.run(["xcodebuild","-showsdks"],True,False)
File "/Library/Application Support/Titanium/mobilesdk/osx/1.6.2/iphone/run.py", line 7, in run
proc = subprocess.Popen(args, stderr=subprocess.STDOUT, stdout=subprocess.PIPE)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 595, in __init__
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 1106, in _execute_child
OSError: [Errno 2] No such file or directory
It worked fine yesterday on the old system. I ran a backup before transferring and then installed the new system from the most recent backup. I'm not sure why it would work on the old system and not the new one.
I ended up reinstalling Xcode, as the error referenced Python. This solved the problem.
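That fits the traceback: the OSError: [Errno 2] is raised by Popen itself, meaning the xcodebuild binary could not be found at all, not that the build failed. A quick standalone check (a sketch, mirroring how run.py invokes it):

import subprocess

try:
    proc = subprocess.Popen(["xcodebuild", "-showsdks"],
                            stderr=subprocess.STDOUT,
                            stdout=subprocess.PIPE)
    print(proc.communicate()[0])
except OSError:
    # Errno 2 here means xcodebuild is missing or not on PATH -
    # reinstalling Xcode restores it.
    print("xcodebuild not found")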
Try deleting everything in build/iphone. Do NOT delete the build/iphone directory, just everything IN it. Then restart Ti Studio and rebuild.