How to show only reviews of the current logged-in user in OpenCart?

I need a little help with OpenCart.
I want to show reviews only to logged-in customers [I have succeeded in that].
But now I want to show each review only to the user who wrote it.
Example:
If User A logs in and writes a review, only User A will see it, not any other user.
If User B logs in and writes a review, only User B will see his review, not User A or any other logged-in user.
Why I want this:
If a user comes back after a few days or months, he can still see his own review. I don't want to show all reviews to all users, only to the user who added each one.

In the model ModelCatalogReview (catalog/model/catalog/review.php) there are two functions you need to change: getReviewsByProductId and getTotalReviewsByProductId.
getReviewsByProductId:
$query = $this->db->query("SELECT r.review_id, r.author, r.rating, r.text, p.product_id, pd.name, p.price, p.image, r.date_added FROM " . DB_PREFIX . "review r LEFT JOIN " . DB_PREFIX . "product p ON (r.product_id = p.product_id) LEFT JOIN " . DB_PREFIX . "product_description pd ON (p.product_id = pd.product_id) WHERE p.product_id = '" . (int)$product_id . "' AND p.date_available <= NOW() AND p.status = '1' AND r.status = '1' AND pd.language_id = '" . (int)$this->config->get('config_language_id') . "' AND r.customer_id = '" . (int)$this->customer->getId() . "' ORDER BY r.date_added DESC LIMIT " . (int)$start . "," . (int)$limit);
(notice AND r.customer_id = '" . (int)$this->customer->getId() . "')
getTotalReviewsByProductId:
$query = $this->db->query("SELECT COUNT(*) AS total FROM " . DB_PREFIX . "review r LEFT JOIN " . DB_PREFIX . "product p ON (r.product_id = p.product_id) LEFT JOIN " . DB_PREFIX . "product_description pd ON (p.product_id = pd.product_id) WHERE p.product_id = '" . (int)$product_id . "' AND p.date_available <= NOW() AND p.status = '1' AND r.status = '1' AND pd.language_id = '" . (int)$this->config->get('config_language_id') . "' AND r.customer_id = '" . (int)$this->customer->getId() . "'");
(notice AND r.customer_id = '" . (int)$this->customer->getId() . "')
This also restricts the listing to logged-in users only, because for a guest (int)$this->customer->getId() is 0 and there are no reviews with customer_id = '0'.
Later Edit:
This applies to OpenCart 1.5.5.1. If you are using a different version, you might need to adjust the code slightly.
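As a minimal sketch of where the change sits (the signature shown follows the stock 1.5.x model, so adjust it to match your file; the early return for guests is an optional extra and is not part of the original answer):

// catalog/model/catalog/review.php -- sketch only; keep the rest of the stock method unchanged
public function getReviewsByProductId($product_id, $start = 0, $limit = 20) {
    // optional: short-circuit for guests so the query never runs with customer_id 0
    if (!$this->customer->isLogged()) {
        return array();
    }

    // ... the original query shown above, including the added
    // AND r.customer_id = '" . (int)$this->customer->getId() . "' condition ...

    return $query->rows;
}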

Power BI: Renaming many tables and columns using the keyboard

Is there a way to use the keyboard to rename tables and columns in Power BI? I have hundreds (or thousands) of columns and tables whose names need to be more human-readable than what is in the database. Using right-click | Rename is very slow. Tabbing to the column and hitting F2 doesn't appear to work. What is the keystroke to enter rename mode?
Or... Is there a way to open a .pbix file in a text editor so I can do the work there? (Certainly Microsoft must have chosen some open, standard, portable format for the file -- like XML? ;) ) I have unzipped the file, but the DataModel file appears to be a binary and not an archive.
Based on user12439754's answer...
(The "ease of use" for this task within Power BI is horrible.)
Since I'm using SQL Server, I was able to write a script that does much of the work.
Issues/future enhancements:
Parameterize the schema (or search for all of them).
Remove the comma at the end of the #"Renamed Columns" definition.
Usage:
Run the script.
Remove the comma at the end of #"Renamed Columns".
Move column names to #"Removed Columns" as needed.
Change the names to what you want the users to see.
Paste the result (one table at a time) into the Advanced Editor.
declare @q table (
    id  int identity(1,1) not null,
    tbl varchar(128) not null,
    col varchar(128) not null
)

-- collect every dbo column, in table/column order
insert @q
select o.name as 'Table'
     , c.name as 'Column'
from sys.sysobjects o
inner join sys.syscolumns c on c.id = o.id
inner join sys.schemas s on s.schema_id = o.uid
where s.name = 'dbo'
order by o.name
     , c.colorder

declare @tbl varchar(128), @t varchar(128), @c varchar(128)
select @tbl = (select top 1 tbl from @q order by id)

declare @i int, @max int
set @i = 1
select @max = count(*) from @q

declare @out table (
    id int identity(1,1) not null,
    a varchar(4000) not null
)

while @i <= @max
begin
    select @t = (select tbl from @q where id = @i)

    -- header of the M query for the current table
    insert @out
    values ('let')
         , (' Source = Sql.Database("FinancialDM", "FinancialDataMart"),')
         , (' dbo_' + @t + ' = Source{[Schema="dbo",Item="' + @t + '"]}[Data],')
         , (' #"Removed Columns" = Table.RemoveColumns(dbo_' + @t + ',{}),')
         , (' #"Renamed Columns" = Table.RenameColumns(#"Removed Columns",{')

    -- one {"old", "new"} rename pair per column of the current table
    while @tbl = @t and @i <= @max
    begin
        select @c = ' {"' + col + '", "' + col + '"}, ' from @q where id = @i
        insert @out
        values (@c)
        set @i = @i + 1
        select @t = (select tbl from @q where id = @i)
    end

    -- footer of the M query, plus blank separator lines
    insert @out
    values (' })')
         , ('in')
         , (' #"Renamed Columns"')
         , ('')
         , ('')
         , ('')

    set @tbl = @t
end

select *
from @out
order by id
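For reference, this is roughly what the script emits for a hypothetical dbo.Customer table with columns CustID and CustName (placeholder names, not taken from the question). Note the trailing comma after the last rename pair, which is why the usage notes tell you to remove it:

let
 Source = Sql.Database("FinancialDM", "FinancialDataMart"),
 dbo_Customer = Source{[Schema="dbo",Item="Customer"]}[Data],
 #"Removed Columns" = Table.RemoveColumns(dbo_Customer,{}),
 #"Renamed Columns" = Table.RenameColumns(#"Removed Columns",{
 {"CustID", "CustID"},
 {"CustName", "CustName"},
 })
in
 #"Renamed Columns"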
Sorry in advance if this only partially solves your problem.
One possible way to speed up the renaming of multiple columns within a table is the procedure below. You will still need to change the table names manually, but this may speed up the column renaming, especially if you have lots of tables with 20+ columns.
Open Power Query Editor
Navigate to the table whose columns you wish to change (rename the table while you're at it)
Reorder a column
Take the column names from the reordered-columns step in the Advanced Editor
Manipulate in Excel or a text editor of your choice (this may require some work the first time but you can create something that will generate the necessary output)
Insert your string of changed column names, in the format {"Column Original", "Column Changed"}, into a newly inserted step:
#"Renamed Columns" = Table.RenameColumns(#"Last Step",{{"Column Original", "Column Changed"},{"Column Original", "Column Changed"},{"Column Original", "Column Changed"}}),

Running Google BigQuery sample network analysis script, I get a syntax error: Was expecting: <EOF>

I am trying to use sample code for Google BigQuery. The query is in legacy SQL.
I have this query in BigQuery:
SELECT
a.name,
b.name,
COUNT(*) AS count
FROM (FLATTEN(
SELECT
GKGRECORDID,
UNIQUE(REGEXP_REPLACE(SPLIT(V2Persons,';'), r',.*', "))
name
FROM [gdelt-bq:gdeltv2.gkg]
WHERE DATE>20150302000000 and DATE < 20150304000000 and V2Persons like
'%Tsipras%'
,name)) a
JOIN EACH (
SELECT GKGRECORDID, UNIQUE(REGEXP_REPLACE(SPLIT(V2Persons,';'), r',.*', ")) name
FROM
[gdelt-bq:gdeltv2.gkg]
WHERE
DATE>20150302000000
AND DATE < 20150304000000
AND V2Persons LIKE '%Tsipras%')) b
ON
a.GKGRECORDID=b.GKGRECORDID
WHERE
a.name<b.name
GROUP EACH BY
1,
2
ORDER BY
3 DESC
LIMIT
250
But it raises the error:
Error: Encountered " "ON" "ON "" at line 11, column 1. Was expecting: <EOF>
You have one too many ) characters after the join. Specifically, '%Tsipras%')) b should likely be '%Tsipras%') b.
Most errors where you see "Was expecting: <EOF>" are caused by mismatched opening and closing pairs, with too many closing characters.
Not 100% sure if this is exactly what you expected, but at least from a syntax perspective the fix is below:
SELECT a.name, b.name, COUNT(*) AS COUNT
FROM (FLATTEN(
SELECT GKGRECORDID, UNIQUE(REGEXP_REPLACE(SPLIT(V2Persons,';'), r',.*', ''))
name
FROM [gdelt-bq:gdeltv2.gkg]
WHERE DATE>20150302000000 AND DATE < 20150304000000 AND V2Persons LIKE
'%Tsipras%'
,name)) a
JOIN EACH (
SELECT GKGRECORDID, UNIQUE(REGEXP_REPLACE(SPLIT(V2Persons,';'), r',.*', ''))
name
FROM [gdelt-bq:gdeltv2.gkg]
WHERE DATE>20150302000000 AND DATE < 20150304000000 AND V2Persons LIKE
'%Tsipras%') b
ON a.GKGRECORDID=b.GKGRECORDID
WHERE a.name<b.name
GROUP EACH BY 1,2
ORDER BY 3 DESC
LIMIT 250
The fixes are in lines 3, 10, and 14.
In lines 3 and 10 I replaced " with ''.
In line 14 I removed the extra ).
I am not 100% sure about line 14, as it might be that an opening ( is actually missing instead.

pyodbc sql results getting cut off

I have a script as below
cursor = connection.cursor()
select_string = "SELECT * from mytable"
cursor.execute(select_string)
data = cursor.fetchall()
print(data)
print len(data)
Data is as listed below
number day info is_true
82 Monday quick "lazy fox" & bear true
12 Tuesday why did 'the frog' cross false
When I print the length of the data, the is_true column is not being counted because of the quotation marks/special characters in the info column. Is there a way I can SELECT * from a table and disregard any quotation marks that might end the column processing early?
The string formatting shouldn't be a problem if you use Pandas for reading the table from the SQL connection. This should work:
import pandas as pd
import pyodbc
connection = pyodbc.connect('<SERVER>')
select_string = "SELECT * from mytable"
data = pd.read_sql(select_string, connection)
print(data)
print(data.shape)
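If you would rather stay with plain pyodbc, printing the rows column by column also makes it easy to confirm that all four columns come back regardless of the quotes in info. A minimal sketch, assuming the same mytable and a placeholder connection string:

import pyodbc

connection = pyodbc.connect('<SERVER>')  # placeholder, as above
cursor = connection.cursor()
cursor.execute("SELECT * FROM mytable")

rows = cursor.fetchall()
print(len(rows))  # number of rows returned
for row in rows:
    # pyodbc rows support access by column name
    print(row.number, row.day, row.info, row.is_true)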

Resolving gaps in the data in Stata with Weekly Time Series Data

I have weekly Google Trends Search query data in Stata. Here is a sample of what the data looks like:
I converted the date string into a date object like so:
gen date2 = date(date, "YMD")
gen year = year(date2)
gen w = week(date2)
gen weekly = yw(year,w)
format weekly %tw
I now want to declare "date2" as my time series reference, so I did the following:
tsset date2, weekly
However, tsreport then reports gaps in the data.
I should have no gaps, as the data is weekly. For some reason, Stata is still assuming I have daily data.
I cannot take first differences of any of these variables because of this issue. How do I resolve this?
I agree with William Lisowski's general advice but have different specific recommendations.
You have weekly data with a daily date stamping each week.
Stata weeks are likely to be of little or no use to you for reasons documented in detail in references that
search week, sj
will disclose. Specifically,
SJ-12-4 dm0065_1 . . . . . Stata tip 111: More on working with weeks, erratum
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N. J. Cox
Q4/12 SJ 12(4):765 (no commands)
lists previously omitted key reference
SJ-12-3 dm0065 . . . . . . . . . . Stata tip 111: More on working with weeks
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N. J. Cox
Q3/12 SJ 12(3):565--569 (no commands)
discusses how to convert data presented in yearly and weekly
form to daily dates and how to aggregate such data to months
or longer intervals
SJ-10-4 dm0052 . . . . . . . . . . . . . . . . Stata tip 68: Week assumptions
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N. J. Cox
Q4/10 SJ 10(4):682--685 (no commands)
tip on Stata's solution for weeks and on how to set up
your own alternatives given different definitions of the
week
Issuing that search command will give you links to .pdf copies of each paper.
I suggest simply
gen date2 = daily(date, "YMD")
format date2 %td
tsset date2, delta(7)
daily() is the same function as date() but I think the name is a better signal to all of precisely what it does. The more important detail is that delta(7) is sufficient to indicate daily data spaced 7 days apart, which is precisely what you have.
To expand on the problem you had: when you converted to daily dates, then you got a numeric variable with values like 18755 in steps of 7 to your last date. You then told Stata through your tsset ..., weekly that these are really weeks. Stata uses an origin for all dates like these of the beginning of 1960. So, Stata is working out what 18755 weeks (etc.) from the beginning of 1960 would be. And your numeric variable is still in steps of 7. So, the reason that Stata is misinterpreting your data is that you gave it incorrect information. tsset will never change a date variable; it just interprets it as you instruct.
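To see the misinterpretation concretely, you can display the same integer in both formats (a quick check, using the example value 18755 from above):

display %td 18755
display %tw 18755

The first should display 08may2011 (a Sunday); the second should display 2320w36, a week more than three centuries in the future, because Stata counts 18755 weeks forward from 1960w1.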
Note also that you created a weekly date variable but then did not use it. That wouldn't have been a good solution either, but it would have been closer to what you want. It appears that all your dates are Sundays, so some years would contain 53 of them and others 52; that's not true of Stata's own weeks, which always number exactly 52 per year.
The problem would be more helpfully stated if it included a listing of the data, rather than a picture, so that others could test and demonstrate correct code.
With that said, you need to carefully review the output of help datetime to improve your understanding of how to work with Stata Internal Format (SIF) date and time data, and of the meaning of a "weekly date" in Stata. I believe that something like the following will start you along the correct path.
gen date2 = date(date, "YMD")
gen weekly = wofd(date2)
format weekly %tw
or, in one fewer step,
gen weekly = wofd(date(date, "YMD"))
format weekly %tw
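If you go this route, you would then presumably declare the weekly variable as the time variable (this final step is not shown in the original answer):

tsset weekly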

Shift columns to the right

I have a SAS dataset which looks like this:
Month Col1 Col2 Col3 Col4
200801 11 2 3 20
200802 5 9 4 10
. . . . .
. . . . .
. . . . .
201212 3 34 1 0
I want to create a dataset by shifting each row's Col1-Col4 values to the right, so the result looks diagonally shifted.
Month Col1 Col2 Col3 Col4 Col5 Col6 Col7 . . . . . . . Coln
200801 11 2 3 20
200802 . 5 9 4 10
. . . . .
. . . . .
. . . . .
201212 . . . . . . . . . 3 34 1 0
Can someone suggest how I can do it?
Thanks!
First off, if you can avoid doing so, do. This is a pretty sparse way to store data, and will involve large datasets (definitely use OPTIONS COMPRESS at least), and usually can be worked around with good use of CLASS variables.
If you really must do this, PROC TRANSPOSE is your friend. While this is possible in the data step, it's less messy and more flexible in PROC TRANSPOSE.
First, make a totally vertical dataset (month+colname+colvalue):
data pre_t;
  set have;
  array cols col1-col4;
  do _t = 1 to dim(cols);
    colname = cats("col", ((_N_ - 1) + _t)); *shifting here, edit this logic as needed;
    value = cols[_t];
    output;
  end;
  keep colname value month;
run;
In that data step, you are creating the eventual column name in colname and setting the data up for the transpose. If your data are not identical to the above (in particular, if they are grouped by something else), _N_ may not work and you may need some logic (such as figuring out the difference from 200801) to calculate the col number; see the sketch below.
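For example, with Month stored as a numeric yyyymm value as in the sample, the shift could be computed from the value itself rather than from _N_ (a sketch, assuming the series starts at 200801):

    offset = (int(month/100) - 2008)*12 + mod(month, 100) - 1; /* months elapsed since 200801 */
    colname = cats("col", offset + _t);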
Then, proc transpose:
proc transpose data=pre_t out=want;
  by month;
  id colname;
  var value;
run;
And voilà, you should have what you were looking for. Make sure pre_t is sorted by month before the transpose so the output comes out in the expected order.
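A minimal sketch of that sort, run between the data step and the transpose (assuming the intermediate dataset name used above):

proc sort data=pre_t;
  by month;
run;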