specify Atspi version before import - python-2.7

I use a Python library which uses pyatspi (from pyatspi import …). When I run it on (L)Ubuntu 16.04, it emits the following warning:
/usr/lib/python2.7/dist-packages/pyatspi/__init__.py:17: PyGIWarning: Atspi was imported without specifying a version first. Use gi.require_version('Atspi', '2.0') before import to ensure that the right version gets loaded.
from gi.repository import Atspi
Although this warning says exactly what I should do, just adding the line gi.require_version('Atspi', '2.0') to /usr/lib/python2.7/dist-packages/pyatspi/__init__.py doesn't work (it gives NameError: name 'gi' is not defined). What am I doing wrong?

It's necessary to import require_version from gi first, so just add:
from gi import require_version
require_version('Atspi', '2.0')
before the
from gi.repository import Atspi
line in the file given by the error message, which was /usr/lib/python2.7/dist-packages/pyatspi/__init__.py here.
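For reference, the top of the patched __init__.py then reads like this (a minimal sketch; the rest of the file stays unchanged):
from gi import require_version
require_version('Atspi', '2.0')  # must run before gi.repository loads Atspi
from gi.repository import Atspi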


ImportError: attempted relative import beyond top-level package whilst referring within same app

I am asking here because a Google search hasn't solved the issue yet.
Django version: 4.0.6
The project structure, created in Visual Studio Community 2022, is as follows:
Project0
|
--Project0
|    |
|    ---settings.py
|    --- <>
|
--App1
|    |
|    ---models.py
|    ---views.py
|    ---forms.py
|
--App2
     |
     ---models.py
     ---views.py
     ---forms.py
     ---tables.py
Project0, App1, and App2 are all at the same level of the hierarchy.
When trying to run the solution, I get the following errors from the App2 files:
File "D:\Projects\Django\Project0\App2\urls.py", line 2, in <module>
from . import views
File "D:\Projects\Django\Project0\App2\views.py", line 5, in <module>
from .tables import ProductTable, ProductHTMXtable
File "D:\Projects\Django\Project0\App2\tables.py", line 2, in <module>
from ..App1.models import List
ImportError: attempted relative import beyond top-level package
1st issue:
App2 has no models of its own; it imports the model from App1 instead. So I guess the line from ..App1.models import List might be wrong. If I remove the two dots '..', I get an "Import could not be resolved" error and the models are not resolved, though that doesn't stop the solution from running.
2nd issue:
Why are from . import views and from .tables import ... also throwing errors?
Unable to fix these issues. Please help.
Replying to Balizok: if I do so, I get a "could not be resolved" error, as attached here:
App1Model could not be resolved error
Import using from App1.models import List instead of from ..App1.models import List and that should do it.
As it is now, you're going beyond the top-level package (as the error says) by using ..App1, whereas referring to the app by name doesn't.
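A minimal sketch of the fixed imports in App2 (assuming App1 is in INSTALLED_APPS and the project root is on sys.path, which manage.py ensures when it runs from that directory):
# App2/tables.py — absolute import across sibling apps
from App1.models import List

# App2/views.py — relative imports within the same app are fine
from .tables import ProductTable, ProductHTMXtable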

ImportError: cannot import name 'load_mnist' from 'pytorchcv'

---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-2cacdf187bba> in <module>
6 import numpy as np
7
----> 8 from pytorchcv import load_mnist, train, plot_results, plot_convolution, display_dataset
9 load_mnist(batch_size=128)
ImportError: cannot import name 'load_mnist' from 'pytorchcv' (/anaconda/envs/py37_pytorch/lib/python3.7/site-packages/pytorchcv/__init__.py)
How can I fix this?
I use Python 3.7 and a Jupyter notebook. The code is from Microsoft's PyTorch Fundamentals module: https://learn.microsoft.com/en-gb/learn/modules/intro-computer-vision-pytorch/5-convolutional-networks
import torch
import torch.nn as nn
import torchvision
import matplotlib.pyplot as plt
from torchinfo import summary
import numpy as np
from pytorchcv import load_mnist, train, plot_results, plot_convolution, display_dataset
load_mnist(batch_size=128)
I installed pytorchcv with the command: pip install pytorchcv
I assume you might have the wrong pytorchcv package. The one on PyPI does not contain load_mnist.
Starting from scratch, you could download MNIST like this:
import torchvision
from torchvision.transforms import ToTensor

data_train = torchvision.datasets.MNIST('./data', download=True, train=True, transform=ToTensor())
data_test = torchvision.datasets.MNIST('./data', download=True, train=False, transform=ToTensor())
The course materials missed one command that must run before importing pytorchcv. This pytorchcv is a helper file from the course, different from the pytorchcv package on PyPI. Run this before importing pytorchcv:
!wget https://raw.githubusercontent.com/MicrosoftDocs/pytorchfundamentals/main/computer-vision-pytorch/pytorchcv.py
Then it should work.
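If the !wget shell magic isn't available (for example, outside Jupyter or on Windows), the same file can be fetched from Python itself; a minimal sketch using only the standard library:
import urllib.request

url = "https://raw.githubusercontent.com/MicrosoftDocs/pytorchfundamentals/main/computer-vision-pytorch/pytorchcv.py"
# save next to the notebook; the local pytorchcv.py is found on sys.path
# before the installed PyPI package of the same name
urllib.request.urlretrieve(url, "pytorchcv.py")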
Please download the prepared .py file from https://raw.githubusercontent.com/MicrosoftDocs/pytorchfundamentals/main/computer-vision-pytorch/pytorchcv.py and put it into the current folder; then VS Code can recognize it.

Apache Beam unable to identify global function in script GCP

I am working on a project to create a stream-processing prediction engine on GCP. I am mostly learning from this repo here. However, when I try to execute the script blogposts/got_sentiment/4_streaming_pipeline/streaming_tweet.py, I keep getting the error
NameError: name 'estimate' is not defined [while running 'generatedPtransform-129']
My script looks as follows:
from __future__ import absolute_import
import argparse
import datetime
import json
import logging
import numpy as np
import apache_beam as beam
import apache_beam.transforms.window as window
from apache_beam.io.gcp.bigquery import parse_table_schema_from_json
from apache_beam.options.pipeline_options import StandardOptions, GoogleCloudOptions, SetupOptions, PipelineOptions
from apache_beam.transforms.util import BatchElements
from googleapiclient import discovery
def init():
    ........

def estimate_cmle():
    init()
    .....

def estimate(instances):
    estimate_cmle()
    ......

def run(argv=None):
    ....
    output = (lines
              | 'assign window key' >> beam.WindowInto(window.FixedWindows(10))
              | 'batch into n batches' >> BatchElements(min_batch_size=49, max_batch_size=50)
              | 'predict sentiment' >> beam.FlatMap(lambda messages: estimate(messages))
              )
    .....

if __name__ == '__main__':
    logging.getLogger().setLevel(logging.INFO)
    run()
This is where Beam seems unable to recognize the estimate function, although I define it in the same script.
Edit
Trying beam.FlatMap(estimate) gave the error
name 'estimate_cmle' is not defined [while running 'generatedPtransform-1208']
Look at these 2 parts:
Function definition
def estimate(instances):
......
Function call
beam.FlatMap(lambda messages: estimate(messages,estimate_cmle))
Your call expects a function with 2 parameters, but your declared function has only one. Your Python script only contains an estimate with 1 parameter; the function with 2 parameters is not defined.
In the examples in the repo, the call passes only 1 parameter, and thus it works. Fix this and it should work.
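In other words, the definition and the call must agree on the number of parameters. A hypothetical sketch of the two consistent pairings, using the names from the question:
# Pairing A: one-parameter function, one-argument call
def estimate(instances):
    ...

beam.FlatMap(lambda messages: estimate(messages))  # or simply beam.FlatMap(estimate)

# Pairing B: two-parameter function, two-argument call
def estimate(instances, estimator):
    ...

beam.FlatMap(lambda messages: estimate(messages, estimate_cmle))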

Why can't I import WD_ALIGN_PARAGRAPH from docx.enum.text?

I transferred some code from IDLE 3.5 (64-bit) to PyCharm (Python 2.7). Most of the code still works; for example, I can import WD_LINE_SPACING from docx.enum.text, but for some reason I can't import WD_ALIGN_PARAGRAPH.
At first, nearly none of the imports worked, but after I did
pip install python-docx
instead of
pip install docx
most of the imports worked except for WD_ALIGN_PARAGRAPH.
# works
from __future__ import print_function
import xlrd
import xlwt
import os
import subprocess
from calendar import monthrange
import datetime
from docx import Document
from datetime import datetime
from datetime import date
from docx.enum.text import WD_LINE_SPACING
from docx.shared import Pt
# does not work
from docx.enum.text import WD_ALIGN_PARAGRAPH
I don't get any error messages, but PyCharm marks the line as an error:
"Cannot find reference 'WD_ALIGN_PARAGRAPH' in 'text.py'".
You can use this instead:
from docx.enum.text import WD_PARAGRAPH_ALIGNMENT
and then substitute WD_PARAGRAPH_ALIGNMENT wherever WD_ALIGN_PARAGRAPH would have appeared before.
The reason this is happening is that the actual enum object is named WD_PARAGRAPH_ALIGNMENT, and a decorator is applied that also allows it to be referenced as WD_ALIGN_PARAGRAPH (which is a little shorter, and possibly clearer). I expect the syntax checker in PyCharm only looks at attributes defined directly in the module and doesn't pick up the alias, which is only created at runtime when the module is imported.
Interestingly, I expect your code would work fine either way. But to get rid of the annoying message you can use the base name.
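A quick way to confirm the alias at runtime, together with typical usage (a minimal sketch; both names should refer to the same enum object):
from docx import Document
from docx.enum.text import WD_ALIGN_PARAGRAPH, WD_PARAGRAPH_ALIGNMENT

assert WD_ALIGN_PARAGRAPH is WD_PARAGRAPH_ALIGNMENT  # alias points at the same object

doc = Document()
p = doc.add_paragraph('Centered text')
p.alignment = WD_PARAGRAPH_ALIGNMENT.CENTER  # same effect as WD_ALIGN_PARAGRAPH.CENTER
doc.save('demo.docx')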
If someone uses pylint it can be easily suppressed with # pylint: disable=E0611 added at the end of the import line.

ImportError: No module named stanford_segmenter

The StanfordSegmenter does not have an interface in NLTK, unlike StanfordPOSTagger or StanfordNER. So to use it, I basically have to create an interface for StanfordSegmenter manually, namely stanford_segmenter.py under ../nltk/tokenize/. I followed the instructions here: http://textminingonline.com/tag/chinese-word-segmenter
However, when I tried to run from nltk.tokenize.stanford_segmenter import stanford_segmenter, I got an error message:
Traceback (most recent call last):
File "C:\Users\qubo\Desktop\stanfordparserexp.py", line 48, in <module>
from nltk.tokenize.stanford_segmenter import stanford_segmenter
ImportError: No module named stanford_segmenter
[Finished in 0.6s]
The instructions said to reinstall NLTK after creating stanford_segmenter.py. I didn't quite get the point, but I did it anyway. However, the process can hardly be called a 'reinstall'; it is more like detaching NLTK from the Python libs and reconnecting it.
I'm using 64-bit Windows and Python 2.7.11. NLTK and all relevant packages are updated to the latest version. I wonder if you can shed some light on this. Thank you.
I was able to import the module by running the following code:
import imp
yourmodule = imp.load_source("module_name", "/path/to/module_name.py")
yourclass = yourmodule.TheClass()
yourclass is an instance of TheClass, the class you want to instantiate. This is similar to:
from pkg_name.module_name import TheClass
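As an aside, on Python 3 the imp module is deprecated; the equivalent standard-library recipe uses importlib (a minimal sketch):
import importlib.util

spec = importlib.util.spec_from_file_location("module_name", "/path/to/module_name.py")
yourmodule = importlib.util.module_from_spec(spec)
spec.loader.exec_module(yourmodule)  # runs the file to populate the module's namespace
yourclass = yourmodule.TheClass()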
So in the case of StanfordSegmenter, the complete code is as follows:
# -*- coding: utf-8 -*-
import imp
import os

ini_path = 'D:/jars/stanford-segmenter-2015-04-20/'
os.environ['STANFORD_SEGMENTER'] = ini_path + 'stanford-segmenter-3.5.2.jar'

# load the hand-written interface module directly from its file path
stanford_segmenter = imp.load_source(
    "stanford_segmenter",
    "C:/Users/qubo/Miniconda2/pkgs/nltk-3.1-py27_0/Lib/site-packages/nltk/tokenize/stanford_segmenter.py")

seg = stanford_segmenter.StanfordSegmenter(
    path_to_model=ini_path + 'data/pku.gz',
    path_to_jar=ini_path + 'stanford-segmenter-3.5.2.jar',
    path_to_dict=ini_path + 'data/dict-chris6.ser.gz',
    path_to_sihan_corpora_dict=ini_path + 'data')

sent = '我有一只小毛驴我从来也不骑。'
text = seg.segment(sent.decode('utf-8'))