Send m-search packets on all network interfaces - python-2.7

I am implementing code that has to find the devices connected to all network interfaces on my machine.
To do this, I first get the IP of every network interface and then send an M-SEARCH command on each of them.
After 2.5 seconds, the port stops listening.
But it is giving me an assertion error.
Code:
class Base(DatagramProtocol):
    """ Class to send M-SEARCH message to devices in network and receive datagram
    packets from them
    """
    SSDP_ADDR = "239.255.255.250"
    SSDP_PORT = 1900
    MS = "M-SEARCH * HTTP/1.1\r\nHOST: {}:{}\r\nMAN: 'ssdp:discover'\r\nMX: 2\r\nST: ssdp:all\r\n\r\n".format(SSDP_ADDR, SSDP_PORT)

    def sendMsearch(self):
        """ Sending M-SEARCH message
        """
        ports = []
        for address in self.addresses:
            ports.append(reactor.listenUDP(0, self, interface=address))
        for port in ports:
            for num in range(4):
                port.write(Base.MS, (Base.SSDP_ADDR, Base.SSDP_PORT))
            reactor.callLater(2.5, self.stopMsearch, port)  # MX + a wait margin

    def stopMsearch(self, port):
        """ Stop listening on port
        """
        port.stopListening()
Error:
Traceback (most recent call last):
File "work\find_devices.py", line 56, in sendMsearch
ports.append(reactor.listenUDP(0, self, interface=address))
File "C:\Python27\lib\site-packages\twisted\internet\posixbase.py", line 374, in listenUDP
p.startListening()
File "C:\Python27\lib\site-packages\twisted\internet\udp.py", line 172, in startListening
self._connectToProtocol()
File "C:\Python27\lib\site-packages\twisted\internet\udp.py", line 210, in _connectToProtocol
self.protocol.makeConnection(self)
File "C:\Python27\lib\site-packages\twisted\internet\protocol.py", line 709, in makeConnection
assert self.transport == None
AssertionError
Please tell me what's wrong with this code and how to correct it.
Also, on Linux machines, if no device is found on the network, it never reaches stopMsearch(). Why?

A protocol can only have one transport. The loop:
for address in self.addresses:
    ports.append(reactor.listenUDP(0, self, interface=address))
tries to create multiple UDP transports and associate them all with self - a single protocol instance.
This is what the assertion error is telling you. The protocol's transport must be None (ie, it must not have a transport). But on the second iteration through the loop, it already has a transport.
Try using multiple protocol instances instead.
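For example, a minimal sketch of that approach (illustrative only; response parsing and interface discovery are left out) creates a fresh protocol instance for each interface, so every listenUDP call gets its own transport:

from twisted.internet import reactor
from twisted.internet.protocol import DatagramProtocol

SSDP_ADDR = "239.255.255.250"
SSDP_PORT = 1900
# Note: the SSDP spec expects double quotes around ssdp:discover in the MAN header.
MS = ("M-SEARCH * HTTP/1.1\r\n"
      "HOST: {}:{}\r\n"
      "MAN: \"ssdp:discover\"\r\n"
      "MX: 2\r\n"
      "ST: ssdp:all\r\n\r\n").format(SSDP_ADDR, SSDP_PORT)

class MSearchProtocol(DatagramProtocol):
    """One instance per interface, so each gets its own transport."""
    def datagramReceived(self, datagram, address):
        print "response from", address  # handle the SSDP response here

def sendMsearch(addresses):
    for address in addresses:
        proto = MSearchProtocol()                   # fresh protocol per port
        port = reactor.listenUDP(0, proto, interface=address)
        for _ in range(4):
            port.write(MS, (SSDP_ADDR, SSDP_PORT))
        reactor.callLater(2.5, port.stopListening)  # MX + a wait margin

# Usage: sendMsearch(list_of_interface_ips); reactor.run()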

Related

Is there a way to reach a SFTP server from a serverless service (or local) with a tunnel machine in between?

I'm building a serverless service (AWS Lambda) to get/put files on an SFTP server on a daily basis.
The only way to access the SFTP server is from another machine (an AWS EC2 instance) which has its IP whitelisted at the SFTP server.
I did try to use the Python SSHTunnelForwarder and paramiko libraries, but I'm not sure if it's possible to bind SSH (tunnel forwarding) and SFTP in a single solution.
Script
with SSHTunnelForwarder(
    (ssh_instance_ip, 22),
    ssh_username=ssh_user,
    ssh_private_key='<key_path>',
    remote_bind_address=('0.0.0.0', 22),
    local_bind_address=('127.0.0.1', 10022),
) as tunnel:
    tunnel.start()
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(password=sftp_pwd, username=sftp_user, hostname=sftp_host, port='<port_value>', allow_agent=True, disabled_algorithms=dict(pubkeys=["rsa-sha2-512", "rsa-sha2-256"]))
    tr = client.get_transport()
    tr.default_max_packet_size = 100000000
    tr.default_window_size = 100000000
    sftp = client.open_sftp()
    sftp.listdir('/')
    sftp.close()
    client.close()
Error:
...
DEBUG:paramiko.transport:Sending global request "keepalive@lag.net"
DEBUG:paramiko.transport:[chan 0] EOF sent (0)
DEBUG:paramiko.transport:EOF in transport thread
Traceback (most recent call last):
File "/home/linuxbrew/.linuxbrew/Cellar/python#3.10/3.10.9/lib/python3.10/site-packages/paramiko/sftp_client.py", line 852, in _read_response
t, data = self._read_packet()
File "/home/linuxbrew/.linuxbrew/Cellar/python#3.10/3.10.9/lib/python3.10/site-packages/paramiko/sftp.py", line 201, in _read_packet
x = self._read_all(4)
File "/home/linuxbrew/.linuxbrew/Cellar/python#3.10/3.10.9/lib/python3.10/site-packages/paramiko/sftp.py", line 188, in _read_all
raise EOFError()
EOFError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/work/scratch/tunnel_test/em_tunnel/app/tunnel_test.py", line 71, in <module>
sftp.listdir('/')
File "/home/linuxbrew/.linuxbrew/Cellar/python#3.10/3.10.9/lib/python3.10/site-packages/paramiko/sftp_client.py", line 218, in listdir
return [f.filename for f in self.listdir_attr(path)]
File "/home/linuxbrew/.linuxbrew/Cellar/python#3.10/3.10.9/lib/python3.10/site-packages/paramiko/sftp_client.py", line 239, in listdir_attr
t, msg = self._request(CMD_OPENDIR, path)
File "/home/linuxbrew/.linuxbrew/Cellar/python#3.10/3.10.9/lib/python3.10/site-packages/paramiko/sftp_client.py", line 822, in _request
return self._read_response(num)
File "/home/linuxbrew/.linuxbrew/Cellar/python#3.10/3.10.9/lib/python3.10/site-packages/paramiko/sftp_client.py", line 854, in _read_response
raise SSHException("Server connection dropped: {}".format(e))
paramiko.ssh_exception.SSHException: Server connection dropped:
I looked at several similar topics but could not find a way to combine SSH tunneling and SFTP in a single solution suitable for a serverless service such as AWS Lambda.
Could you help me either debug this or suggest other ways (maybe bash or JavaScript) to combine SSH tunneling and SFTP?
Additional information:
EC2 access is done via user@host and an RSA private key
SFTP access is done via user@host and a password
I've found a solution using the ssh2 lib from JavaScript.
The following script accesses the SFTP server with a user and password, jumping over another machine (an EC2 instance in my case) that is accessible with a user and an RSA key.
The script lists the content of the requested path on the SFTP server and writes the result into a my.json file.
The script was adapted from these examples: ssh2 - npm.
const fs = require('fs');
const path = require('path');
const Client = require('ssh2').Client;

// SSH tunnel details
const sshTunnelIp = "ssh_host";
const sshUser = "ssh_user";
const sshKeyPath = path.resolve(__dirname, "path_to_your_key");
const sshKey = fs.readFileSync(sshKeyPath);

// SFTP server details
const sftpServerIp = "sftp_host";
const sftpUser = "sftp_user";
const sftpPass = "sftp_password";

// Create an SSH client
const sshClient = new Client();

// Connect to the SSH tunnel (EC2 instance)
sshClient.connect({
  host: sshTunnelIp,
  username: sshUser,
  privateKey: sshKey
});

sshClient.on('ready', function() {
  // Create an SFTP client and connect to the SFTP server
  sshClient.sftp(function(err, sftp) {
    if (err) throw err;
    // List the content of a given path from the SFTP server
    sftp.readdir('path_to_sftp_dir', function(err, list) {
      if (err) throw err;
      // Write the result to a json file
      fs.writeFile(
        './my.json',
        JSON.stringify(list),
        function (err) {
          if (err) {
            console.error('Something went wrong');
          }
        }
      );
      console.log("listing directory", list);
      sftp.end();
      sshClient.end();
    });
  });
});
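For completeness, a rough Python equivalent of the same jump-host idea, using sshtunnel and paramiko, might look like the sketch below (untested; hostnames and credentials are placeholders, and the key difference from the original script is that remote_bind_address points at the SFTP host and paramiko connects to the tunnel's local end):

from sshtunnel import SSHTunnelForwarder
import paramiko

# Placeholders corresponding to the values used in the question
ssh_instance_ip = 'ec2_jump_host'
ssh_user = 'ssh_user'
sftp_host = 'sftp_host'
sftp_user = 'sftp_user'
sftp_pwd = 'sftp_password'

with SSHTunnelForwarder(
    (ssh_instance_ip, 22),                 # the whitelisted EC2 jump host
    ssh_username=ssh_user,
    ssh_private_key='<key_path>',
    remote_bind_address=(sftp_host, 22),   # forward to the SFTP server itself
    local_bind_address=('127.0.0.1', 10022),
) as tunnel:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # Connect to the local end of the tunnel, which now reaches the SFTP server
    client.connect('127.0.0.1', port=tunnel.local_bind_port,
                   username=sftp_user, password=sftp_pwd)
    sftp = client.open_sftp()
    print(sftp.listdir('/'))
    sftp.close()
    client.close()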

Celery unexpectedly closes TCP connection

I'm using RabbitMQ 3.8.2 with Erlang 22.2.7 and I am having a problem consuming tasks. My setup is django-celery-rabbitmq. While publishing messages to a queue everything goes fine until the queue length reaches 1200 messages. After this point RabbitMQ starts closing the AMQP connection with the following errors:
...
2022-11-01 09:35:25.327 [info] <0.20608.9> accepting AMQP connection <0.20608.9> (185.121.83.107:60447 -> 185.121.83.116:5672)
2022-11-01 09:35:25.483 [info] <0.20608.9> connection <0.20608.9> (185.121.83.107:60447 -> 185.121.83.116:5672): user 'rabbit_admin' authenticated and granted access to vhost '/'
...
2022-11-01 09:36:59.129 [warning] <0.19994.9> closing AMQP connection <0.19994.9> (185.121.83.108:36149 -> 185.121.83.116:5672, vhost: '/', user: 'rabbit_admin'):
client unexpectedly closed TCP connection
...
[error] <0.11162.9> closing AMQP connection <0.11162.9> (185.121.83.108:57631 -> 185.121.83.116:5672):
{writer,send_failed,{error,enotconn}}
...
2022-11-01 09:35:48.256 [error] <0.20201.9> closing AMQP connection <0.20201.9> (185.121.83.108:50058 -> 185.121.83.116:5672):
{inet_error,enotconn}
...
Then the django-celery consumer disappears from the queue list, the messages become "ready", and the celery pods are unable to ack a message after its job is finished, failing with the following error:
ERROR: [2022-11-01 09:20:23] /usr/src/app/project/celery.py:114 handle_message Error while handling Rabbit task: [Errno 104] Connection reset by peer
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/amqp/connection.py", line 514, in channel
return self.channels[channel_id]
KeyError: None
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/src/app/project/celery.py", line 76, in handle_message
message.ack()
File "/usr/local/lib/python3.10/site-packages/kombu/message.py", line 125, in ack
self.channel.basic_ack(self.delivery_tag, multiple=multiple)
File "/usr/local/lib/python3.10/site-packages/amqp/channel.py", line 1407, in basic_ack
return self.send_method(
File "/usr/local/lib/python3.10/site-packages/amqp/abstract_channel.py", line 70, in send_method
conn.frame_writer(1, self.channel_id, sig, args, content)
File "/usr/local/lib/python3.10/site-packages/amqp/method_framing.py", line 186, in write_frame
write(buffer_store.view[:offset])
File "/usr/local/lib/python3.10/site-packages/amqp/transport.py", line 347, in write
self._write(s)
ConnectionResetError: [Errno 104] Connection reset by peer
I have noticed that the message size also affects this behavior. In the case above each message is about 1000-1500 characters. If I decrease it to 50 characters, the threshold at which RabbitMQ starts to close the AMQP connection shifts to 4000-5000 messages.
I suspect that the problem is a lack of resources for RabbitMQ, but I don't know how to find out what exactly is going wrong. If I run htop on the server, I see that the 2 available CPUs are never under high load (less than 20% each) and RAM usage is 400mb / 3840mb. So nothing seems to be wrong there. Is there any resource-checking command for RabbitMQ? The tasks do not take long to complete, about 10 seconds each.
Also, maybe some heartbeats from the client are being missed (I had that problem earlier, but not now; there are currently no error messages about it).
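One quick, client-side way to check whether a heartbeat-enabled connection to the broker can be established and held is a small kombu snippet like the sketch below (kombu is the AMQP library Celery uses; the broker URL and credentials are placeholders):

from kombu import Connection

with Connection('amqp://rabbit_admin:<password>@185.121.83.116:5672//',
                heartbeat=30, connect_timeout=10) as conn:
    conn.ensure_connection(max_retries=3)  # raises if the broker cannot be reached
    print('connected:', conn.connected)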
Also if I run sudo journalctl --system | grep rabbitmq, I get the following output:
......
May 24 05:15:49 oms-git.omsystem sshd[809111]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.154.63.169 user=rabbitmq
May 24 05:15:51 oms-git.omsystem sshd[809111]: Failed password for rabbitmq from 43.154.63.169 port 37010 ssh2
May 24 05:15:51 oms-git.omsystem sshd[809111]: Disconnected from authenticating user rabbitmq 43.154.63.169 port 37010 [preauth]
May 24 16:12:32 oms-git.omsystem sudo[842182]: ad : TTY=pts/3 ; PWD=/var/log/rabbitmq ; USER=root ; COMMAND=/usr/bin/tail -f -n 1000 rabbit@XXX-git.log
......
Maybe there is another issue with the firewall, but I don't see any error messages about that in /var/log/rabbitmq/rabbit@XXX.log.
My Celery configuration on client is like:
CELERY_TASK_IGNORE_RESULT = True
CELERY_RESULT_BACKEND = 'django-db'
CELERY_CACHE_BACKEND = 'django-cache'
CELERY_SEND_EVENTS = False
CELERY_BROKER_POOL_LIMIT = 30
CELERY_BROKER_HEARTBEAT = 30
CELERY_BROKER_CONNECTION_TIMEOUT = 600
CELERY_PREFETCH_MULTIPLIER = 1
CELERY_SEND_EVENTS = False
CELERY_WORKER_CONCURRENCY = 1
CELERY_TASK_ACKS_LATE = True
Currently I'm running the pod using following command:
celery -A project.celery worker -l info -f /var/log/celery/celery.log -Ofair
Also I have tried various arguments to limit prefetch or turn off heartbeats, but it didn't work:
celery -A project.celery worker -l info -f /var/log/celery/celery.log --without-heartbeat --without-gossip --without-mingle
celery -A project.celery worker -l info -f /var/log/celery/celery.log --prefetch-multiplier=1 --pool=solo --
I expect that there are no limitations on queue length and every celery pod in my kubernetes cluster consumes and acks messages without errors.

Salt masters behind ELB have flaky connection to minions

I am running the following setup at AWS:
Elastic Loadbalancer in front of two EC2 machines (Amazon Linux) with a docker container that the salt-master runs in
Two EC2 instances with salt-minions installed
The 'master' value in the minion config is set to the dns of the loadbalancer (SaltMaster-env-vpc-test.szfegmankg.us-east-1.elasticbeanstalk.com)
The ELB accepts all traffic from the minions
The Salt-masters accept all traffic from the ELB as well as from the minions
The Salt-masters PKI Folder is shared between the two masters
The Salt-masters have the same private+public keys
The Salt-masters run on 2017.7.1
The Salt-minions run on 2016.11.5 (I tried it with 2017.7.1, but got the same results)
The Salt-minions accept all traffic from the ELB as well as from the masters
The master config looks as follows:
open_mode: True
worker_threads: 20
auto_accept: True
log_level: error
log_level_logfile: debug
extension_modules: srv/salt/ext
rest_cherrypy:
  port: 8000
  disable_ssl: True
  debug: True
external_auth:
  pam:
    saltdev:
      - .*
      - '@runner'
# Setting the job_cache to redis.
# The redis config settings are generated at the start of the docker container and
# will be written into /etc/salt/master.d/redis.conf
master_job_cache: redis
cache: redis
pki_dir: /etc/salt/pki/master/efs
The minion config looks as follows:
id: WIN-AB3GO7BJ72I
log_file: C:\salt.log
multiprocessing: False
log_level_logfile: debug
pki_dir: /conf/pki/minion
master: SaltMaster-env-vpc-test.szfegmankg.us-east-1.elasticbeanstalk.com
master_type: str
master_alive_interval: 30
open_mode: True
root_dir: c:\salt
ipc_mode: tcp
recon_default: 1000
recon_max: 199000
recon_randomize: True
In the master log files, I can see on both masters:
2017-09-05 10:06:18,118 [salt.utils.verify][DEBUG ][35] This salt-master instance has accepted 2 minion keys.
A salt-key -L on both masters yield the same result:
Accepted Keys:
WIN-AB3GO7BJ72I
WIN-EDMP9VB716B
Denied Keys:
Unaccepted Keys:
Rejected Keys:
So it looks like all is fine and everything should work. However, a test.ping is extremely flaky. Sometimes it works, but most of the time it doesn't.
Most of the time neither master gets any return from the minion and on the minion side I can see in the log that the minion never receives the message to execute 'test.ping' from the master.
Example 1:
test.ping from Master1:
root@d7383ff8f8bf:/# salt 'WIN-EDMP9VB716B' test.ping
[ERROR ] Exception raised when processing __virtual__ function for salt.loaded.int.cache.consul. Module will not be loaded: 'module' object has no attribute 'Consul'
[ERROR ] An un-handled exception was caught by salt's global exception handler:
KeyError: 'redis.ls'
Traceback (most recent call last):
File "/usr/bin/salt", line 10, in <module>
salt_main()
File "/usr/lib/python2.7/dist-packages/salt/scripts.py", line 476, in salt_main
client.run()
File "/usr/lib/python2.7/dist-packages/salt/cli/salt.py", line 173, in run
for full_ret in cmd_func(**kwargs):
File "/usr/lib/python2.7/dist-packages/salt/client/__init__.py", line 805, in cmd_cli
**kwargs):
File "/usr/lib/python2.7/dist-packages/salt/client/__init__.py", line 1597, in get_cli_event_returns
connected_minions = salt.utils.minions.CkMinions(self.opts).connected_ids()
File "/usr/lib/python2.7/dist-packages/salt/utils/minions.py", line 577, in connected_ids
search = self.cache.ls('minions')
File "/usr/lib/python2.7/dist-packages/salt/cache/__init__.py", line 244, in ls
return self.modules[fun](bank, **self._kwargs)
File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 1113, in __getitem__
func = super(LazyLoader, self).__getitem__(item)
File "/usr/lib/python2.7/dist-packages/salt/utils/lazy.py", line 101, in __getitem__
raise KeyError(key)
KeyError: 'redis.ls'
Traceback (most recent call last):
File "/usr/bin/salt", line 10, in <module>
salt_main()
File "/usr/lib/python2.7/dist-packages/salt/scripts.py", line 476, in salt_main
client.run()
File "/usr/lib/python2.7/dist-packages/salt/cli/salt.py", line 173, in run
for full_ret in cmd_func(**kwargs):
File "/usr/lib/python2.7/dist-packages/salt/client/__init__.py", line 805, in cmd_cli
**kwargs):
File "/usr/lib/python2.7/dist-packages/salt/client/__init__.py", line 1597, in get_cli_event_returns
connected_minions = salt.utils.minions.CkMinions(self.opts).connected_ids()
File "/usr/lib/python2.7/dist-packages/salt/utils/minions.py", line 577, in connected_ids
search = self.cache.ls('minions')
File "/usr/lib/python2.7/dist-packages/salt/cache/__init__.py", line 244, in ls
return self.modules[fun](bank, **self._kwargs)
File "/usr/lib/python2.7/dist-packages/salt/loader.py", line 1113, in __getitem__
func = super(LazyLoader, self).__getitem__(item)
File "/usr/lib/python2.7/dist-packages/salt/utils/lazy.py", line 101, in __getitem__
raise KeyError(key)
KeyError: 'redis.ls'
I am aware that the redis error will be fixed soon https://github.com/saltstack/salt/issues/43295
Example 2:
test.ping from Master1, ~ 1 Minute after Example 1:
root@d7383ff8f8bf:/# salt 'WIN-EDMP9VB716B' test.ping
WIN-EDMP9VB716B:
True
Also during my tests, a test.ping from Master2 never succeeded.
I would like to know if there is some flaw in my setup that I am not seeing, or if Salt only works with HAProxy as a load balancer.
Or maybe Salt doesn't work behind an ELB at all?
See https://github.com/saltstack/salt/issues/43368 for more answers.
TL;DR
Because there is no session stickiness for TCP connections, it is currently not possible to run a salt-master behind an ELB if you use the ELB's IP/name as the entry point.

ESP8266 NodeMCU Lua "Socket client" to "Python Server" connection not possible

I was trying to connect a NodeMCU Socket client program to a Python server program, but I was not able to establish a connection.
I tested a simple Python client server code and it worked well.
Python Server Code
import socket               # Import socket module

s = socket.socket()         # Create a socket object
host = socket.gethostname() # Get local machine name
port = 12345                # Reserve a port for your service.
s.bind((host, port))        # Bind to the port
s.listen(5)                 # Now wait for client connection.

while True:
    c, addr = s.accept()    # Establish connection with client.
    print 'Got connection from', addr
    print c.recv(1024)
    c.send('Thank you for connecting')
    c.close()               # Close the connection
Python client code (with this I tested the above code)
import socket               # Import socket module

s = socket.socket()         # Create a socket object
host = socket.gethostname() # Get local machine name
port = 12345                # Reserve a port for your service.
s.connect((host, port))
s.send('Hi i am aslam')
print s.recv(1024)
s.close()                   # Close the socket when done
The output server side was
Got connection from ('192.168.99.1', 65385)
Hi i am aslam
NodeMCU code
--set wifi as station
print("Setting up WIFI...")
wifi.setmode(wifi.STATION)
--modify according to your wireless router settings
wifi.sta.config("xxx", "xxx")
wifi.sta.connect()

function postThingSpeak()
    print("hi")
    srv = net.createConnection(net.TCP, 0)
    srv:on("receive", function(sck, c) print(c) end)
    srv:connect(12345, "192.168.0.104")
    srv:on("connection", function(sck, c)
        print("Wait for connection before sending.")
        sck:send("hi how r u")
    end)
end

tmr.alarm(1, 1000, 1, function()
    if wifi.sta.getip() == nil then
        print("Waiting for IP address...")
    else
        tmr.stop(1)
        print("WiFi connection established, IP address: " .. wifi.sta.getip())
        print("You have 3 seconds to abort")
        print("Waiting...")
        tmr.alarm(0, 3000, 0, postThingSpeak)
    end
end)
But when I run the NodeMCU there is no response in the Python server.
The Output in the ESPlorer console looks like
Waiting for IP address...
Waiting for IP address...
Waiting for IP address...
Waiting for IP address...
Waiting for IP address...
Waiting for IP address...
WiFi connection established, IP address: 192.168.0.103
You have 3 seconds to abort
Waiting...
hi
Am I doing something wrong or missing some steps here?
Your guidance is appreciated.
After I revisited this for the second time it finally clicked. I must have scanned your Lua code too quickly the first time.
You need to set up all event handlers (srv:on) before you establish the connection. They may not fire otherwise - depending on how quickly the connection is established.
srv = net.createConnection(net.TCP, 0)
srv:on("receive", function(sck, c) print(c) end)
srv:on("connection", function(sck)
    print("Wait for connection before sending.")
    sck:send("hi how r u")
end)
srv:connect(12345, "192.168.0.104")
The example in our API documentation is wrong but it's already fixed in the dev branch.

How to setup outgoing mail server through outlook office365 in odoo 9

I am trying to set up an outgoing mail server in Odoo 9. I filled in all the fields and tested the connection; the connection test succeeds, but sending a mail generates an error.
The fields are filled in like this:
Name: sendmail
Priority: 10
SMTP Server: smtp.office365.com
SMTP Port: 25
Debugging: enable
Connection Security: TLS (STARTTLS)
Username: my user name
Password: password
But when we send any mail, it generates the error below:
16-12-06 10:04:28,440 426 INFO test openerp.addons.base.ir.ir_mail_server: Mail delivery failed via SMTP server 'smtp.office365.com'.
SMTPDataError: 550
5.7.60 SMTP; Client does not have permissions to send as this sender
2016-12-06 10:04:28,443 426 ERROR test openerp.addons.mail.models.mail_mail: failed sending mail (id: 136) due to Mail Delivery Failed
Mail delivery failed via SMTP server 'smtp.office365.com'.
SMTPDataError: 550
5.7.60 SMTP; Client does not have permissions to send as this sender
Traceback (most recent call last):
File "/usr/lib/python2.7/dist- packages/openerp/addons/mail/models/mail_mail.py", line 262, in send
res = IrMailServer.send_email(msg, mail_server_id=mail.mail_server_id.id)
File "/usr/lib/python2.7/dist-packages/openerp/api.py", line 248, in wrapper
return new_api(self, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/openerp/api.py", line 490, in new_api
result = method(self._model, cr, uid, *args, **old_kwargs)
File "/usr/lib/python2.7/dist-packages/openerp/addons/base/ir/ir_mail_server.py", line 483, in send_email
raise MailDeliveryException(_("Mail Delivery Failed"), msg)
MailDeliveryException: (u'Mail Delivery Failed', u"Mail delivery failed via SMTP server 'smtp.office365.com'.\nSMTPDataError: 550\n5.7.60 SMTP; Client does not have permissions to send as this sender")
I have tried a lot to solve this but I am not finding any solution. If you have any solution, please share it with me.
Use port 587.
The error message tells you that the sender is invalid - you can only send as the mailbox owner (primary SMTP address) or as one of the proxy addresses associated with the mailbox.
Removing all catchall parameters (mail.catchall.domain and mail.catchall.alias) under "Settings" -> "Technical" -> "Parameters" -> "System Parameters" made it work like a charm.
WORKS LIKE A CHARM:
Removing all catchall parameters (mail.catchall.domain and mail.catchall.alias) under "Settings" -> "Technical" -> "Parameters" -> "System Parameters" made it work like a charm. TY Debasish
I'm on Odoo v12.
That alone was not sufficient for my problem; I also had to delete the alias domain, but there is another thing to check:
I initially created my Odoo installation with a Gmail address. It worked for a while, but I had to switch to a professional email address because all my invitation emails were blocked by Google's bot for looking suspicious. This only happened in Odoo v12 because there are more links in the mail.
So I configured my real SMTP server in Odoo but got the 550 error. Odoo had kept the primary Gmail address in the COMPANY settings and tried to send through my other SMTP server with the Gmail name. The other server didn't accept it, so it sent back error 550.
Once I put my new email address in the company description and deleted the alias domain, it worked!
PS: Don't try to edit ir_mail_server.py to hard-code your email address ... it doesn't work.