I am trying to set the table_cache option; however, I cannot find table_cache among the RDS parameters. Where can I change this option?
Thank you.
The table_cache system variable was deprecated and renamed to table_open_cache back in MySQL 5.1, and it is still called table_open_cache in 5.6.
It's in the RDS parameter group.
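For reference, a hedged AWS CLI sketch of changing it (the parameter group name and value here are placeholders; you need a custom parameter group attached to your instance, since the default group is read-only):

    aws rds modify-db-parameter-group \
        --db-parameter-group-name my-custom-params \
        --parameters "ParameterName=table_open_cache,ParameterValue=2000,ApplyMethod=immediate"

table_open_cache is a dynamic variable, so ApplyMethod=immediate should take effect without a reboot.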
However, it's very rare that this is an appropriate value to tweak. table_open_cache has long been known to scale negatively: the larger you make it beyond what the workload actually needs, the worse the server tends to perform.
If a tuning script has recommended changing that value, the odds are extremely high that you're operating on bad advice. Tuning scripts in general are notorious for well-intentioned but ill-conceived advice.
I never had this issue until recently, but now, when creating a VM, the option to add a GPU is always unclickable (greyed out).
This is what it looks like: [screenshot]. What is the cause of this?
This is not because there are no GPUs in my region; I checked several regions. I also don't think it's an issue with my account, since I CAN create GPU instances through the Marketplace.
It is probably because of the machine type that is currently selected. You can only attach GPUs to general-purpose N1 machine types; GPUs are not supported for the other machine types. Feel free to check this documentation for reference.
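For illustration, a hedged gcloud sketch of creating an N1 instance with a GPU attached (the instance name, zone, accelerator type, and image are placeholders; availability varies by zone, and GPU instances must use a TERMINATE maintenance policy):

    gcloud compute instances create my-gpu-vm \
        --zone us-central1-a \
        --machine-type n1-standard-4 \
        --accelerator type=nvidia-tesla-t4,count=1 \
        --maintenance-policy TERMINATE \
        --image-family debian-11 --image-project debian-cloud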
When changing the language of an item, it takes around 4-6 seconds for the chosen language to be reflected, and that is for admin users; for non-admin users it takes more than 8 seconds, which is causing a major performance issue. Kindly let me know how this issue can be resolved, or at least how the performance can be improved.
It would be worth looking at the CMS Performance Tuning Guide and the CMS Diagnostics Guide. They are for 7.x, and I do not believe there are version 8.x equivalents at the moment, but most of the advice will still be valid.
Also check the fragmentation level of the indexes on your Master database, and have a SQL maintenance plan in place to rebuild them as often as your environment requires; it can really help performance.
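As a starting point, a hedged T-SQL sketch that lists fragmented indexes (run it in the context of your Master database; the 30% threshold and the index name in the rebuild example are illustrative):

    -- List indexes with more than 30% fragmentation in the current database
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30
      AND i.name IS NOT NULL
    ORDER BY ips.avg_fragmentation_in_percent DESC;

    -- Rebuild a specific index once identified (name is a placeholder)
    -- ALTER INDEX [IX_Items_Name] ON [dbo].[Items] REBUILD;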
Check the Sitecore logs for errors during the slow periods, and monitor all the servers in your environment to see where the performance bottleneck is (CPU, memory, etc.).
I am using the IBM ILOG JRules 7.1 trial for a POC. I am using decision tables to check customer registration data.
My ILOG decision table rule is: if a customer's state is any of CA, IL, or AL, then set the status to 'eligible'; otherwise mark the customer as 'ineligible' for the offer.
In the happy path, I can add the state codes as domain literals and the rule works fine.
But I need to load these domain values dynamically from a database (MySQL), perhaps using some IRL code. Has anyone implemented a similar requirement? It would be very helpful if someone could point me in the right direction.
One of the general principles of JRules is that you should call the rules engine with all the necessary information, if possible. From a performance perspective, accessing the database during rule execution isn't a good idea. You might also lose the ability to use your RuleApp in a clustered environment. Decisions also become less traceable and reproducible, because it's harder to know what was in your database at any given moment.
Depending on how often your data changes, I suggest you add these values as a second input parameter and retrieve the data before you call the rules engine (a sketch follows at the end of this answer). The second possibility is to use the dynamic domain plugin to load those values from the database prior to deployment, but then you would have to redeploy the RuleApp every time the data changes. With the dynamic domain plugin you can specify a data provider (e.g. Excel, MySQL, etc.) and populate your BOM with the attributes contained in the database. These dynamic domain values show up as attributes and can be synced from the BOM view in Rule Studio as well as from Rule Team Server.
In WODM (the successor of JRules 7.1) this functionality is built in; it's possible that this plugin is not part of the 7.1 demo and has to be added individually.
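To illustrate the first option, here is a minimal, hedged Java sketch that loads the state codes from MySQL before the engine is invoked. The table and column names are hypothetical placeholders, and passing the set into the rule session is left to your JRules invocation code:

    import java.sql.*;
    import java.util.HashSet;
    import java.util.Set;

    public class EligibleStatesLoader {
        // Loads the eligible state codes (e.g. CA, IL, AL) from MySQL.
        // Table and column names below are hypothetical placeholders.
        public static Set<String> load(String jdbcUrl, String user, String password)
                throws SQLException {
            Set<String> states = new HashSet<>();
            try (Connection con = DriverManager.getConnection(jdbcUrl, user, password);
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT state_code FROM eligible_states");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    states.add(rs.getString("state_code"));
                }
            }
            return states;
        }
    }

The resulting set would then be passed to the rule session as a second input parameter alongside the customer, so the decision logic can test membership without touching the database during execution.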
My company is at the very end of the development process of an (awesome :)) web application. This app will be available as an online service for (hopefully) a significant number of users. This is our biggest Django release so far, and as we prepare to release it, some questions about deployment have to be answered.
Q1: how do we determine the required server parameters for a predicted X number of users / Y hits per minute (or some other metric)?
Q2: which hosting solution (shared/VPS/dedicated) is worth considering?
Q3: which optimizations should be done first?
I know that this is very subjective and dependent on the size of the site, code quality, and other factors, but I'm very interested in your experiences with Django (and not only Django) deployment. Any hints, links, or advice are kindly welcome. Thanks in advance.
Which hosting solution you want also depends on how much of the server you want to take care of yourself (from upgrades to backups), and you should decide whether you want that responsibility or would rather leave it to someone else.
I think you can only really determine the necessary requirements and bottlenecks in your application through testing with the estimated load! Try to simulate as many requests as you expect. Think about caching, where memcached is the best option you have. If you try to cache things, one great tool is the django-debug-toolbar (http://github.com/robhudson/django-debug-toolbar), which can show you a lot about how many database hits you have (don't take the timings it reports for granted, but analyse them and keep an eye on the number of hits) and, e.g., how many templates are rendered.
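A minimal, hedged sketch of wiring up the toolbar (the exact settings names depend on your Django and toolbar versions; check the project's README for yours):

    # settings.py -- minimal django-debug-toolbar wiring (development only)
    INSTALLED_APPS += ['debug_toolbar']
    MIDDLEWARE += ['debug_toolbar.middleware.DebugToolbarMiddleware']
    INTERNAL_IPS = ['127.0.0.1']  # the toolbar only renders for these IPs
    # Newer toolbar versions also require including debug_toolbar.urls in urls.py.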
If your system grows, you can first of all think about serving your static media files from another location! As for the web server, I have had great experiences using lighttpd instead of the fat Apache, but I guess you have to evaluate that for yourself!
Also take into consideration which database backend to use; in shared environments there is in most cases a bigger load on the MySQL servers than on the PostgreSQL servers, but evaluate what works best for you!
You can get some guesses here, but to get a halfway reasonable performance estimate you have to measure the performance of your application yourself. You should then be able to roughly extrapolate the performance on different hardware.
Most of the time the bottleneck is the database; you should get enough RAM to keep it in memory if possible.
"Web application" can encompass so many different things, we can really do no more than guess here.
As for optimization: if it fits your needs, implement some caching (e.g. with memcached); that can give you huge speed improvements.
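A hedged sketch of what that could look like on a modern Django version (the backend path changed across versions, and the view is a hypothetical example):

    # settings.py -- point Django's cache framework at a local memcached
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
            'LOCATION': '127.0.0.1:11211',
        }
    }

    # views.py -- cache an expensive view's rendered response for 15 minutes
    from django.http import HttpResponse
    from django.views.decorators.cache import cache_page

    @cache_page(60 * 15)
    def product_list(request):
        # ... expensive queries and rendering would go here (hypothetical view)
        return HttpResponse("rendered product list")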
I am currently working on a C++/COM project using ArcEngine (from ESRI), which has little to no support in terms of documentation (the SDK is there). Anyway, I am wondering if anyone here has any experience with making the initialization process of ArcEngine faster. Right now it takes 30-35 seconds just to initialize the engine, and we are going to be running several of these applications. Does anyone have any experience with this?
It's a very weird and odd task, but ESRI's developer forums are no help, and I couldn't find anything on Google.
Any ideas?
It's been almost a decade since I last played with ESRI stuff, so I can't help you with anything specific to ArcEngine.
Maybe you can pool instances? In the best-case scenario you would be able to reuse ArcEngine instances, and could return an instance to the pool after you're done with it.
If that's not possible, you could at least try to have a number of instances ready to roll, although whether that is possible and/or useful depends a lot on the specifics of your app.
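A rough, hedged C++ sketch of the pooling idea; ArcEngineInstance here is a hypothetical stand-in for whatever wraps your initialized engine:

    #include <memory>
    #include <mutex>
    #include <vector>

    // Hypothetical wrapper around an initialized engine session.
    struct ArcEngineInstance { /* ... expensive to construct ... */ };

    class EnginePool {
    public:
        // Pre-warm the pool so the ~30-second cost is paid up front.
        explicit EnginePool(std::size_t n) {
            for (std::size_t i = 0; i < n; ++i)
                idle_.push_back(std::make_unique<ArcEngineInstance>());
        }

        std::unique_ptr<ArcEngineInstance> acquire() {
            std::lock_guard<std::mutex> lock(mtx_);
            if (idle_.empty())  // pool exhausted: fall back to paying the cost
                return std::make_unique<ArcEngineInstance>();
            auto inst = std::move(idle_.back());
            idle_.pop_back();
            return inst;
        }

        void release(std::unique_ptr<ArcEngineInstance> inst) {
            std::lock_guard<std::mutex> lock(mtx_);
            idle_.push_back(std::move(inst));  // return the instance for reuse
        }

    private:
        std::mutex mtx_;
        std::vector<std::unique_ptr<ArcEngineInstance>> idle_;
    };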
Is it really COM? In that case, ArcEngine will be exposing a set of COM interfaces. COM interfaces are not magic, and they are not uniquely bound to one program; in fact, COM has explicit support for proxying. This is used by DCOM, for example: you get a local proxy for the remote server.
In this case, it should be possible to write a custom COM proxy that fakes the initialization but forwards everything else. Towards your client, the proxy's COM interface is identical, except faster. Towards ArcEngine, your proxy can wait quite a long time between calls.
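A hedged sketch of the lazy-initialization flavour of that idea. IEngine and its method are hypothetical stand-ins for the real COM interfaces, and a true COM proxy would also need the usual IUnknown/registration plumbing, which is omitted here:

    #include <memory>

    // Hypothetical interface, standing in for the real COM interface.
    struct IEngine {
        virtual ~IEngine() = default;
        virtual void DoWork() = 0;
    };

    // Proxy that defers the expensive engine startup until the first real call.
    class LazyEngineProxy : public IEngine {
    public:
        void DoWork() override {
            ensureInitialized();  // pay the ~30-second cost only when needed
            real_->DoWork();      // then forward everything to the real engine
        }

    private:
        void ensureInitialized() {
            if (!real_) real_ = createRealEngine();
        }
        // Wraps the expensive ArcEngine initialization (not shown here).
        std::unique_ptr<IEngine> createRealEngine();
        std::unique_ptr<IEngine> real_;
    };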
Something that I have found useful for getting ESRI products to start faster (not necessarily ArcEngine, but this probably applies) is to specify the port number (generally 27004) in the registry values where the license server is defined:
HKEY_LOCAL_MACHINE\SOFTWARE\ESRI\License\LICENSE_SERVER
HKEY_LOCAL_MACHINE\SOFTWARE\ESRI\ArcInfo\Workstation\8.0\LICENSE_SERVER
When you set this during installation or through the Desktop Administrator, it is generally something like: #yourserver.name
Change this to 27004#yourserver.name
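For example, from an elevated command prompt (the value data is a placeholder, and this assumes LICENSE_SERVER is a string value under the License key as above; back up the key first):

    reg add "HKLM\SOFTWARE\ESRI\License" /v LICENSE_SERVER /t REG_SZ /d "27004#yourserver.name" /f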
Again, this may not solve your issue, but if you're not already doing it, it's worth a try. I've found it speeds things up in our environment, both when using a license manager on the network and with a hardware dongle on the local machine.
Well, from my understanding, ArcEngine initialization initializes a special COM environment.
You don't ever get any sort of real handle on the initialized environment. Can you somehow store a COM environment and pass it to other programs? My current idea is:
a Windows service running in the background with an initialized ArcEngine. The program somehow queries the service, and the service returns the COM environment. Is this even possible?
I had a lot of grief with ESRI forums providing very little help. It feels like Arc* developers are largely on their own.
Using ArcEngine + .NET, the initialization time for an application has been trivial (maybe 1 second?) in our environment. Are you using a slow remote server, or is this JUST the engine, with no network or maps being loaded?
Whenever I've had to deal with large data sets, though, ESRI's software has been a pig.
Good to see some discussion on SO of ESRI products! Not a lot here yet...
Exactly what line is taking those 30-35 seconds? If I had to do some psychic debugging, I would guess that you are running into a problem with your license server.
Check that first.