How to convert 35 classes of cityscapes dataset to 19 classes? - computer-vision

The following is a small snippet of my code. Using this, I can train my model, called 'lolNet', on the Cityscapes dataset. But the dataset contains 35 classes/labels [0-34].
imports ***
trainloader = torch.utils.data.DataLoader(
    datasets.Cityscapes('/media/farshid/DataStore/temp/cityscapes/', split='train', mode='fine',
                        target_type='semantic', target_transform=trans,
                        transform=input_transform),
    batch_size=batch_size, num_workers=2)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net = lolNet()
criterion = CrossEntropyLoss2d()
net.to(device)
num_of_classes = 34
for epoch in range(int(0), 200000):
    lr = 0.0001
    running_loss = 0.0
    for batch, data in enumerate(trainloader, 0):
        inputs, labels = data
        labels = labels.long()
        inputs, labels = inputs.to(device), labels.to(device)
        labels = labels.view([-1, ])

        optimizer = optim.Adam(net.parameters(), lr=lr)
        optimizer.zero_grad()

        outputs = net(inputs)
        outputs = outputs.view(-1, num_of_classes)

        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()

        outputs = outputs.to('cpu')
        outputs = outputs.data.numpy()
        outputs = outputs.reshape([-1, num_of_classes])

        mask = np.zeros([outputs.shape[0]])
        for i in range(len(outputs)):
            mask[i] = np.argmax(outputs[i])
        mask = mask.reshape([-1, 1])

        IoU = jaccard_score(labels.to('cpu').data, mask, average='micro')
But I want to train my model on only 19 of those classes. The 19 classes are listed here. The classes to skip are the ones marked with "ignoreInEval" = True. PyTorch's dataloader helper for this dataset doesn't provide any way to do this.
So my question is: how can I train my model on the desired 19 classes of this dataset using PyTorch's "datasets.Cityscapes" API?

It's been a while, but I'm leaving an answer, as it can be useful for others:
First, create a mapping to the 19 classes + background. The background class collects the less important classes that carry the ignore flag, as mentioned in the question.
# Mapping of ignore categories and valid ones (numbered from 1-19)
mapping_20 = {
    0: 0,
    1: 0,
    2: 0,
    3: 0,
    4: 0,
    5: 0,
    6: 0,
    7: 1,
    8: 2,
    9: 0,
    10: 0,
    11: 3,
    12: 4,
    13: 5,
    14: 0,
    15: 0,
    16: 0,
    17: 6,
    18: 0,
    19: 7,
    20: 8,
    21: 9,
    22: 10,
    23: 11,
    24: 12,
    25: 13,
    26: 14,
    27: 15,
    28: 16,
    29: 0,
    30: 0,
    31: 17,
    32: 18,
    33: 19,
    -1: 0
}
Then, for each label image that you load for training (the grayscale images where each pixel contains a class, following the pattern "{city}_{number}_{number}_gtFine_labelIds.png"), run the function below.
It will convert each pixel according to the mapping above, so your label images (masks) will contain only 20 different values (19 classes + 1 background) instead of 35.
def encode_labels(mask):
    label_mask = np.zeros_like(mask)
    for k in mapping_20:
        label_mask[mask == k] = mapping_20[k]
    return label_mask
Then you can train your model normally with this new number of classes.
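For example, this can be hooked into the target_transform argument of datasets.Cityscapes so the remapping happens on the fly. The following is a minimal sketch, not part of the original answer; the transforms.Lambda wrapper and the PIL-to-NumPy conversion are assumptions about how the loader is set up:
import numpy as np
import torch
from torchvision import datasets, transforms

# Assumes mapping_20 and encode_labels from above are already defined.
target_transform = transforms.Lambda(
    lambda t: torch.from_numpy(encode_labels(np.array(t))).long())

trainset = datasets.Cityscapes('/media/farshid/DataStore/temp/cityscapes/',
                               split='train', mode='fine', target_type='semantic',
                               target_transform=target_transform)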

First you download the model and the weights:
import torch
import torch.nn as nn
import torchvision.models as models
r = models.resnet50(pretrained=True)
Note that the original resnet has 1000 categories/classes, so when you download the pretrained model, that last fc layer will have 1000 outputs.
The forward() method you have, together with the code above it, is your model.
You can remove the last fully connected (fc) layer from the original resnet50 model, add a new fc layer with exactly 19 outputs, and train the classifier only for that last layer; all the other layers should be frozen. That way you learn just the 19 classes you need.
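A minimal sketch of that freeze-and-replace approach (the optimizer choice and learning rate here are illustrative assumptions, not part of the original answer):
import torch
import torch.nn as nn
import torchvision.models as models

r = models.resnet50(pretrained=True)
for p in r.parameters():
    p.requires_grad = False                 # freeze all pretrained layers
r.fc = nn.Linear(r.fc.in_features, 19)      # new head with 19 outputs, trainable by default
optimizer = torch.optim.Adam(r.fc.parameters(), lr=1e-4)  # train only the new fc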
Note that resnet's __init__ method also takes the number of classes, so you can try that instead; but in that case you cannot load the pretrained weights, so you need pretrained=False and you have to train from scratch:
import torch
import torch.nn as nn
import torchvision.models as models
r = models.resnet50(num_classes=19, pretrained=False)

Related

Pyomo bin sizes

I am new to pyomo. I would like to ask if there is a way to achieve the following requirements.
I want my assets to be assigned to 5 different bins. Each bin has a maximum capacity; for example, y1 has max 50, y2 has max 20, and so on.
Some of my assets can only go to certain bins. For example, A can only go to y1 and y2; B can go to y4 and y5.
Minimise the number of bins used.
My current code is shown below; with it, every year (bin) gets filled with at least 1 asset. But I would like only 2 or 3 of the years to be used (i.e. minimise the number of bins), and I would like assets to be placed from the smallest year to the highest. A sketch of one way to model the bin minimisation follows the code.
import pyomo.environ as pyo
from pyomo.opt import SolverFactory

value_asset = {'J': 2, 'B': 4, 'D': 18, 'C': 34, 'A': 20, 'E': 31}
bins = {'y1': 50, 'y2': 20, 'y3': 30, 'y4': 70, 'y5': 40}
Assets = {'A': ['y1', 'y2'], 'J': ['y1', 'y2'], 'E': ['y4', 'y5'], 'B': ['y4', 'y5'],
          'D': ['y5', 'y4', 'y3'],
          'C': ['y1', 'y2', 'y3', 'y4', 'y5']}

model = pyo.ConcreteModel()
model.Assets = pyo.Set(initialize=Assets.keys())
model.budget = pyo.Set(initialize=bins.keys())
model.x = pyo.Var(model.Assets, model.budget, within=pyo.Integers, bounds=(0, None))

# make sure that the totals are always less than or equal to the budget
model.less_budget = pyo.ConstraintList()
for b in model.budget:
    model.less_budget.add(expr=sum(model.x[asset, b] * value_asset[asset] for asset in model.Assets) <= bins[b])

# exclude the years that a given asset cannot go to
model.excluded = pyo.ConstraintList()
for asset in model.Assets:
    inc = Assets[asset]
    exc = list(bins.keys() - inc)
    for t in exc:
        model.excluded.add(expr=model.x[asset, t] == 0)

# each asset can only go to 1 bin
model.one_bins = pyo.ConstraintList()
for asset in model.Assets:
    model.one_bins.add(expr=sum(model.x[asset, b] for b in model.budget) <= 1)

model.obj = pyo.Objective(expr=sum(model.x[asset, b] for asset in model.Assets for b in model.budget), sense=pyo.maximize)

solver = pyo.SolverFactory('cbc', executable=r'C:\Users\cc\Downloads\Cbc-2.10-win64-msvc15-md\bin\cbc.exe')
solver.solve(model)
model.x.display()
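One standard way to express "minimise the number of bins used" is a binary use-indicator per bin, linked to the assignment variables. The following sketch extends the model above and replaces its objective (the indicator formulation and the capacity-style big-M are assumptions, not from the original post):
# Deactivate the original objective; we now minimise the number of years used.
model.obj.deactivate()

# Binary indicator: 1 if year/bin b is used at all.
model.used = pyo.Var(model.budget, within=pyo.Binary)

# Link: assets may only be placed in a year that is marked as used.
model.link = pyo.ConstraintList()
for b in model.budget:
    model.link.add(sum(model.x[a, b] for a in model.Assets) <= len(Assets) * model.used[b])

# Every asset must be placed exactly once (the original model allowed leaving assets out).
model.place_all = pyo.ConstraintList()
for a in model.Assets:
    model.place_all.add(sum(model.x[a, b] for b in model.budget) == 1)

model.min_bins = pyo.Objective(expr=sum(model.used[b] for b in model.budget), sense=pyo.minimize)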

Newbie in pyomo: translate a Concrete problem into Abstract

I'm encountering some issues trying to translate a Concrete Model into an Abstract one.
from pyomo.environ import *

Months = RangeSet(6)
RequiredHours = {1: 8000, 2: 9000, 3: 9800, 4: 9900, 5: 10050, 6: 10500}
Notifications = {1: 2, 2: 0, 3: 2, 4: 0, 5: 1, 6: 0}
Costs = {'Attendants': 5100, 'Trainees': 3600}
Hours = {'Attendants': 150, 'Trainees': 25}
TraineeMax = 5

model = ConcreteModel()
model.TraineesNb = Var(Months, domain=NonNegativeIntegers, bounds=(0, TraineeMax))

AvailableAttendants = {}
AvailableTrainees = {}
for m in Months:
    if m == 1:
        AvailableAttendants[m] = 62
        AvailableTrainees[m] = model.TraineesNb[m]
    if m == 2:
        AvailableAttendants[m] = AvailableAttendants[m-1] - Notifications[m-1]
        AvailableTrainees[m] = model.TraineesNb[m] + model.TraineesNb[m-1]
    if m > 2:
        AvailableAttendants[m] = AvailableAttendants[m-1] - Notifications[m-1] + model.TraineesNb[m-2]
        AvailableTrainees[m] = model.TraineesNb[m] + model.TraineesNb[m-1]

# Objective
model.Costs = Objective(
    expr=sum(AvailableAttendants[m] for m in Months) * Costs["Attendants"] +
         sum(AvailableTrainees[m] for m in Months) * Costs["Trainees"],
    sense=minimize)

# declare constraints
model.NeededHours = ConstraintList()
for m in Months:
    model.NeededHours.add(expr=AvailableAttendants[m] * Hours["Attendants"] +
                               AvailableTrainees[m] * Hours["Trainees"] >= RequiredHours[m])
The concrete model returns the expected results (perhaps not the most elegant code, but it works). In the Abstract version of this model, an error occurs at the loop [for m in model.Months].
Thanks for your help and comments
Naji
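The usual cause of that error: an AbstractModel has no data until create_instance() is called, so construction-time loops like for m in model.Months must move into rules, which Pyomo evaluates after the data is loaded. Below is a heavily simplified sketch of that pattern, not a full translation of the model above (the constant 150 * 62 stands in for the attendant hours, and the three-month data is illustrative):
from pyomo.environ import (AbstractModel, Set, Param, Var, Constraint,
                           NonNegativeIntegers)

model = AbstractModel()
model.Months = Set()
model.RequiredHours = Param(model.Months)
model.TraineesNb = Var(model.Months, domain=NonNegativeIntegers, bounds=(0, 5))

# Rules are called on the concrete instance, after the data has been loaded,
# so indexing model.Months here is safe.
def needed_hours_rule(mdl, m):
    return 150 * 62 + 25 * mdl.TraineesNb[m] >= mdl.RequiredHours[m]
model.NeededHours = Constraint(model.Months, rule=needed_hours_rule)

instance = model.create_instance(data={None: {
    'Months': {None: [1, 2, 3]},
    'RequiredHours': {1: 8000, 2: 9000, 3: 9800},
}})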

linspace() generates float values, but int is required

I am generating x-axis values, which are times from 1 to 23, and y-values, which are numbers of clients. I want to join these two lists into a dictionary, which is done, but I am getting float values for x because I generate them with linspace().
import random
import numpy as np

time_values = np.linspace(1, 23, 23)  # 1, 2, ..., 23
number_of_clients = []  # empty list that will hold the number of clients
for i in range(1, 24, 1):
    rand_value = random.randint(1, 20)  # generate a number of clients
    number_of_clients.append(rand_value)
data = dict(zip(time_values, number_of_clients))
print(data)
output is
{1.0: 12, 2.0: 11, 3.0: 3, 4.0: 19, 5.0: 12, 6.0: 12, 7.0: 5, 8.0: 13, 9.0: 15, 10.0: 3, 11.0: 15, 12.0: 20, 13.0: 5, 14.0: 3, 15.0: 18, 16.0: 12, 17.0: 5, 18.0: 6, 19.0: 8, 20.0: 16, 21.0: 19, 22.0: 1, 23.0: 16}
How do I convert 1.0 to 1, and so on? I have tried int(time_values), but it did not work.
Try the astype() method to convert the NumPy float array to an int array:
time_values = np.linspace(1, 23, 23)  # 1, 2, ..., 23
number_of_clients = []  # empty list that will hold the number of clients
for i in range(1, 24, 1):
    rand_value = random.randint(1, 20)  # generate a number of clients
    number_of_clients.append(rand_value)
data = dict(zip(time_values.astype(int), number_of_clients))
print(data)
or
time_values = np.linspace(1, 23, 23, dtype='int')  # 1, 2, ..., 23
number_of_clients = []  # empty list that will hold the number of clients
for i in range(1, 24, 1):
    rand_value = random.randint(1, 20)  # generate a number of clients
    number_of_clients.append(rand_value)
data = dict(zip(time_values, number_of_clients))
print(data)
output:
{1: 17, 2: 6, 3: 8, 4: 3, 5: 12, 6: 11, 7: 18, 8: 1, 9: 8, 10: 1, 11: 17, 12: 2, 13: 5, 14: 6, 15: 1, 16: 8, 17: 19, 18: 2, 19: 13, 20: 15, 21: 16, 22: 17, 23: 14}
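Since the x values are just the consecutive integers 1 through 23, np.arange would sidestep the float issue entirely (an alternative not mentioned in the original answer):
time_values = np.arange(1, 24)  # array([1, 2, ..., 23]) with integer dtype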

Using Gurobi in Python and adding variables

I am trying to write my first Gurobi optimization code, and this is where I am stuck:
I have the following dictionary for my first subscript:
Input:
i = {}  # (initialisation implied by the output below)
for k in range(1, 11):
    i[k] = int(k)
print(i)
Output: {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9, 10: 10}
And, I have the following dictionaries for my second subscript:
c_il = {1: 2, 2: 1, 3: 1, 4: 4, 5: 3, 6: 4, 7: 3, 8: 2, 9: 1, 10: 4}
c_iu = {1: 3, 2: 2, 3: 2, 4: 5, 5: 4, 6: 5, 7: 4, 8: 3, 9: 2, 10: 5}
I am trying to create the variables as follows:
x = m.addVars(i, c_il, vtype=GRB.BINARY, name="x")
x = m.addVars(i, c_iu, vtype=GRB.BINARY, name="x")
Apparently, this is not giving what I am looking for. What I am looking for is x_(i),(c_il) and x_(i),(c_iu); ignore the parentheses.
More clearly, the following is what I am trying to obtain using the dicts i, c_il, and c_iu:
{1: <gurobi.Var x[1,2]>,
2: <gurobi.Var x[2,1]>,
3: <gurobi.Var x[3,1]>,
4: <gurobi.Var x[4,4]>,
5: <gurobi.Var x[5,3]>,
6: <gurobi.Var x[6,4]>,
7: <gurobi.Var x[7,3]>,
8: <gurobi.Var x[8,2]>,
9: <gurobi.Var x[9,1]>,
10: <gurobi.Var x[10,4]>,
11: <gurobi.Var x[1,3]>,
12: <gurobi.Var x[2,2]>,
13: <gurobi.Var x[3,2]>,
14: <gurobi.Var x[4,5]>,
15: <gurobi.Var x[5,4]>,
16: <gurobi.Var x[6,5]>,
17: <gurobi.Var x[7,4]>,
18: <gurobi.Var x[8,3]>,
19: <gurobi.Var x[9,2]>,
20: <gurobi.Var x[10,5]>}
Since I am using dictionaries everywhere, I want to keep it consistent by continuing to use dictionaries so that I can do multiplications and additions with my parameters which are all in dictionaries. Is there any way to create these variables with m.addVars or m.addVar?
Thanks!
Edit: Modified to make it more clear.
It looks like you want to create 10 variables that are indexed by something. The best way to do this is to build the index as a list. If you want x[12], x[21], etc., then write:
from gurobipy import *
m = Model()
il = [ 12, 21, 31, 44, 53, 64, 73, 82, 91, 104 ]
x = m.addVars(il, vtype=GRB.BINARY, name="x")
And if you want to write x[1,2], x[2,1], then write:
from gurobipy import *
m = Model()
il = [ (1,2), (2,1), (3,1), (4,4), (5,3), (6,4), (7,3), (8,2), (9,1), (10,4) ]
x = m.addVars(il, vtype=GRB.BINARY, name="x")
After a few years of experience, I can now easily answer this myself. Since my past self was concerned with keeping the dictionaries as-is (a choice I would question today), a quick solution is as follows.
x = {}
for (i, j) in c_il.items():
    x[i, j] = m.addVar(vtype=GRB.BINARY, name="x%s" % str([i, j]))
for (i, j) in c_iu.items():
    x[i, j] = m.addVar(vtype=GRB.BINARY, name="x%s" % str([i, j]))
Alternatively,
x = {(i, j): m.addVar(vtype=GRB.BINARY, name="x%s" % str([i, j]))
     for (i, j) in c_il.items()}
for (i, j) in c_iu.items():
    x[i, j] = m.addVar(vtype=GRB.BINARY, name="x%s" % str([i, j]))
A one-liner alternative:
x = {(i, j): m.addVar(vtype=GRB.BINARY, name="x%s" % str([i, j]))
     for (i, j) in [(k, l) for (k, l) in c_il.items()] + [(k, l) for (k, l) in c_iu.items()]}
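Since dict.items() already yields the (i, j) pairs, the same index can also be fed straight to m.addVars, combining this with the tuple-list approach from the first answer (a sketch, not from the original answers; it assumes m, c_il, and c_iu from above):
il = list(c_il.items()) + list(c_iu.items())
x = m.addVars(il, vtype=GRB.BINARY, name="x")  # x[1,2], x[2,1], ..., x[10,5]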

python networkX: Making graph from tuples and assigning different colour for nodes

new = (('AXIN', 37, REPORTED),
       ('LGR', 34, REPORTED),
       ('NKD', 29, REPORTED),
       ('TNFRSF', 23, REPORTED),
       ('APCDD', 18, REPORTED),
       ('TOX', 15, UNREPORTED),
       ('LEF', 14, REPORTED),
       ('PLCB', 13, REPORTED),
       ('MME', 13, UNREPORTED),
       ('NOTUM', 13, UNREPORTED),
       ('GNG', 11, REPORTED),
       ('LOXL', 10, UNREPORTED))
import matplotlib.pyplot as plt
import networkx as nx

children = sorted(new, key=lambda x: x[1])
parent = children.pop()[0]
G = nx.Graph()
for child, weight, state in children:
    G.add_edge(parent, child, weight=weight)
width = list(nx.get_edge_attributes(G, 'weight').values())
plt.figure(figsize=(20, 10))
nx.draw_networkx(G, font_size=10, node_size=2000, alpha=0.6)  # width=width gives very fat lines
plt.savefig("gene-expression-graph.pdf")
In this nx graph, how can I make the UNREPORTED nodes green and the REPORTED nodes yellow?
The parent node is the node with the largest number, i.e., AXIN, 37.
colors = []
for i in new:
    if i[2] == 'UNREPORTED':
        colors.append('green')
    elif i[2] == 'REPORTED':
        colors.append('yellow')
nx.draw_networkx(G, font_size=10, node_size=2000, alpha=0.6, node_color=colors)
The mismatch in ordering comes from the dictionaries that underlie networkx's graph representation. If you ensure that the list of colors is ordered the same way as the nodes, you will have the right color for the right node.
I've written two different approaches here that achieve what I think you want.
Note: I declared numeric values for REPORTED and UNREPORTED rather than turning the third element of every tuple into a string, but this part isn't essential.
# Declare the graph:
REPORTED = 1
UNREPORTED = 2
new = (('AXIN', 37, REPORTED),
       ('LGR', 34, REPORTED),
       <...>
       ('LOXL', 10, UNREPORTED))

# (G is the graph built from `new` as in the question)

# 2 axes to show the different approaches
plt.figure(1); plt.clf()
fig, ax = plt.subplots(1, 2, num=1, sharex=True, sharey=True)

### option 1: draw components step-by-step
# positions, so all components are drawn in the right place
pos = nx.spring_layout(G)
# identify which nodes are reported/unreported
nl_r = [name for (name, w, state) in new if state == REPORTED]
nl_u = [name for (name, w, state) in new if state == UNREPORTED]
# draw each subset of nodes in the relevant color
nx.draw_networkx_nodes(G, pos=pos, nodelist=nl_r, node_color='y', node_size=2000, ax=ax[0])
nx.draw_networkx_nodes(G, pos=pos, nodelist=nl_u, node_color='g', node_size=2000, ax=ax[0])
# the edges and labels also need to be drawn
nx.draw_networkx_edges(G, pos=pos, ax=ax[0])
nx.draw_networkx_labels(G, pos=pos, ax=ax[0], font_size=10)

### option 2: more complex color-list construction (but a simpler plot command)
nl, cl = zip(*[(name, 'y') if state == REPORTED else (name, 'g') for (name, w, state) in new])
nx.draw_networkx(G, pos=pos, nodelist=nl, node_color=cl, node_size=2000, ax=ax[1], font_size=10)
plt.show()
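A third variant, not in the original answer: build the color list in the order of G.nodes(), which is the node order draw_networkx uses by default, so the node/color mapping cannot drift:
# Look up each node's reported state, then color in G's own node order.
status = {name: state for (name, w, state) in new}
colors = ['y' if status[n] == REPORTED else 'g' for n in G.nodes()]
nx.draw_networkx(G, pos=pos, node_color=colors, node_size=2000, font_size=10)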