SharePoint 2013 Management Shell: variable is not recognized, and therefore the account attached to the variable cannot be found - sharepoint-2013

I am developing a SharePoint 2013 instance in Hyper-V on Windows Server 2012 R2. Currently I am struggling to attach credentials to a variable in the SharePoint Management Shell and then pass that variable to a new SharePoint managed account.
After I run the following commands, I receive the error below.
Commands:
"Variable" = Get-Credential "Domain\Username"
New-SPManagedAccount -Credential "Variable"
Error:
New-SPManagedAccount -Credential "Variable" : Some or all identity references could not be translated.
Essentially, the new SharePoint managed account cannot be created from the variable because the account cannot be found. However, the first command appeared to work, so I am lost as to where or how the variable stored the "Domain\Username".

You could try this. In PowerShell, variable names must start with $; "Variable" in quotes is just a string literal, so New-SPManagedAccount tried to resolve a literal account named "Variable" and failed.
$cred = Get-Credential
New-SPManagedAccount -Credential $cred
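You can also confirm what the variable holds before passing it on (an optional sanity check; Domain\Username is the placeholder from the question):
$cred = Get-Credential "Domain\Username"
$cred.UserName # should print Domain\Username
New-SPManagedAccount -Credential $cred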
For more information, you could read this article.
https://learn.microsoft.com/en-us/powershell/module/sharepoint-server/new-spmanagedaccount?view=sharepoint-server-ps#examples

Related

How to Upload PBIX file stored in Azure storage container to Power BI Service

My requirement is to upload a PBIX file stored in an Azure storage container to the Power BI Service without downloading it to a local drive, because I have to use the PowerShell script in an Automation runbook.
Normally we can upload the PBIX file by giving a local path, like below:
$pbixFilePath = "C:\PBIXFileLocation\Test.pbix"
$import = New-PowerBIReport -Path $pbixFilePath -Workspace $workspace -ConflictAction CreateOrOverwrite
$import | Select-Object *
But which path do I have to use if the PBIX file is stored in an Azure storage container, and how can the PowerShell script be created? Is it possible?
I tried to list the blobs in the container with the Get-AzStorageBlob cmdlet and passed the result as the path in the above script, but ended up with an error.
If possible, please help me with a sample PowerShell script to achieve the above requirement.
Thanks in Advance!
The issue can be resolved by following my similar post on the Azure platform. AnuragSingh-MSFT is a gem; he explained it clearly and resolved the issue:
A basic understanding of Azure Automation runbook execution should help clarify this doubt. When runbooks are designed to authenticate and run against resources in Azure, they run in an Azure sandbox. Azure Automation assigns a worker to run each job during runbook execution in the sandbox. Please see this link for more details - Runbook execution environment. These sandboxes are isolated environments with access to only some locations/paths/directories.
The following section should help answer the question - ... which path I have to use if the PBIX file is stored in Azure storage container and how the PowerShell script can be created?
The script snippet provided by Manu above would download the blob content into the same directory inside the sandbox from which the script is running. You can access this path inside the script using "." - for example, if the blob you are downloading is named testBlob, it will be available at .\testBlob ("." stands for the current directory).
Therefore, the pbixFilePath can be initialized as $pbixFilePath = ".\Test.pbix"
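For example, downloading without -Destination lands the file in the current directory (a sketch; assumes you are already authenticated and $ContainerName and $Ctx are initialized as in the snippet further below):
Get-AzStorageBlobContent -Blob "Test.pbix" -Container $ContainerName -Context $Ctx # downloads to .\Test.pbix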
Another option is to use $env:temp, as mentioned in the question. It is one of the environment variables available on a local machine (on your workstation), which generally resolves to C:\Users\<username>\AppData\Local\Temp.
In the Azure Automation sandbox environment, this variable resolves to C:\Users\Client\Temp.
Therefore, you could download the blob content using the following line:
Get-AzStorageBlobContent -Blob $blob -Container $ContainerName -Context $Ctx -Destination $env:temp # -Destination sets the target folder; by default it is the local directory (.)
In this case, you would initialize the path as $pbixFilePath = $env:temp + "\Test.pbix"
Either case is fine as long as the Automation limits are not exceeded.
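Putting it together, a minimal end-to-end runbook sketch (assuming you are already authenticated to both Azure and Power BI, $ContainerName, $Ctx, and $workspace are initialized, and the blob is named Test.pbix):
# Download the PBIX blob into the sandbox temp folder
Get-AzStorageBlobContent -Blob "Test.pbix" -Container $ContainerName -Context $Ctx -Destination $env:temp
# Import it into the Power BI workspace
$pbixFilePath = $env:temp + "\Test.pbix"
$import = New-PowerBIReport -Path $pbixFilePath -Workspace $workspace -ConflictAction CreateOrOverwrite
$import | Select-Object *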

java.io.IOException: The Application Default Credentials are not available

I am fairly new to GCP API functions.
I am currently trying to use the Text-to-Speech module following these steps: https://cloud.google.com/text-to-speech/docs/libraries
I did not set up the environment variable, since I used authExplicit(String jsonPath) for authentication: https://cloud.google.com/docs/authentication/production
My code looks like the following:
public void main() throws Exception {
    String jsonPath = "/User/xxx/xxxx/xxxxxx/xxxx.json";
    authExplicit(jsonPath);
    // calling the text-to-speech function from the above link
    text2speech("some text");
}
authExplicit(jsonPath) goes through without any problem and prints a bucket listing, so I assumed the credential key in the JSON file was valid. However, the text2speech function returns the following error:
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I want to get the text2speech function working by calling Google Cloud API functions.
Please let me know how to solve this issue.
Your advice would be highly appreciated.
It's confusing.
Application Default Credentials (ADC) is a process that looks for credentials in various places, including the env var GOOGLE_APPLICATION_CREDENTIALS.
If GOOGLE_APPLICATION_CREDENTIALS is unset and the code is running on a Google Cloud Platform (GCP) compute service (e.g. Compute Engine), then ADC uses the metadata service to determine the credentials. If neither applies, ADC fails and raises an error.
Your code fails because authExplicit does not use ADC; it loads the Service Account key from the file and creates a Storage client using those credentials. Only the Storage client is thus authenticated.
I recommend a (simpler) solution: Use ADC and have Storage and Text2Speech clients both use ADC.
You will need to set the GOOGLE_APPLICATION_CREDENTIALS env var to the path to a key if you run your code off GCP (i.e. not on GCE or similar) but when it runs on GCP, it will leverage the service's credentials.
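For example, on a development machine you might set it before launching the app (a PowerShell sketch; the key path is the placeholder from the question):
$env:GOOGLE_APPLICATION_CREDENTIALS = "/User/xxx/xxxx/xxxxxx/xxxx.json"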
You will need to create both the Storage and Text-to-Speech clients so that they use ADC:
See:
Cloud Storage
Text-to-Speech
Storage storage = StorageOptions.getDefaultInstance().getService();
...
And:
TextToSpeechClient textToSpeechClient = TextToSpeechClient.create();
...

AWS DotNet Core credentials

I have an existing .NET Core web application in which I need to use a profile other than [default] when I'm developing locally.
I'm running into an issue in that the location of the credentials file does not appear to default to ~/.aws/credentials yet. Based on the credential lookup sequence, check 2 should work if I set the value of AWSConfigs.AWSProfileName before creating the SSM client, but it doesn't; it just falls through the remaining flow and throws an error saying it can't find the EC2 metadata. The same is the case for check 3. When the credentials are in the [default] definition, check 4 succeeds, which I expected to fail as well if the defaults haven't been initialized yet. I have multiple AWS accounts for which I get temporary security tokens from an SSO system based on the config file, and because of the temporary-token requirement I can't use the [default] profile: I need to be able to switch between accounts while running the same code base.
I've been able to get around this by explicitly accessing the credential store and generating a set of credentials to pass into the constructor for the SSM Client.
Amazon.Runtime.CredentialManagement.CredentialProfile developerProfile;
AmazonSimpleSystemsManagementClient ssmClient;
// Test whether a local credentials file contains the named profile
if (new Amazon.Runtime.CredentialManagement.SharedCredentialsFile().TryGetProfile(Configuration["AWS:Profile"], out developerProfile))
{
    AWSCredentials credentials = new Amazon.Runtime.SessionAWSCredentials(developerProfile.Options.AccessKey, developerProfile.Options.SecretKey, developerProfile.Options.Token);
    ssmClient = new AmazonSimpleSystemsManagementClient(credentials, developerProfile.Region);
}
else
{
    ssmClient = new AmazonSimpleSystemsManagementClient(Region);
}
The above snippet is designed to allow running locally with a specific profile and file location; when either does not exist, it assumes it's running in an EC2 or ECS environment and can source the credentials from the instance metadata.
The code that needs access to AWS Parameter Store is located in the Startup method so that other properties can be initialized before the ConfigureServices method is run. I have additional AWS services for which I initialize clients that work as expected after ConfigureServices has run. Should I not expect the credential provider to be properly initialized before the ConfigureServices method is run?

How can I provision IIS on EC2 Windows with a resource?

I have just started working on a project that is hosted on an AWS EC2 Windows instance with IIS. I want to move this setup to a more reliable place, and one of the first things I wanted to do was to move away from snowflake servers that are set up and configured by hand.
So I started looking at Terraform from HashiCorp. My thought was that I could define the entire setup, including the network etc., in Terraform and that way make sure it was configured correctly.
I thought I would start by defining a server: a simple Windows Server instance with IIS installed. But this is where I ran into my first problem. I thought I could configure IIS from Terraform; I guess you can't. So my next thought was to combine Terraform with PowerShell Desired State Configuration (DSC).
I can set up an IIS server on a box using DSC, but I am stuck invoking DSC from Terraform. I can provision a vanilla server easily. I have tried looking for a good blog post on how to use DSC in combination with Terraform, but I can't find one that explains how to do it.
Can anyone point me towards a good place to read up on this? Or alternatively if the reason I can't find this is that it is just bad practice and I should do it in another way, then please educate me.
Thanks
You can run arbitrary PowerShell scripts on startup as follows:
resource "aws_instance" "windows_2016_server" {
//...
user_data = <<-EOF
<powershell>
$file = $env:SystemRoot + "\Temp\${var.some_variable}" + (Get-Date).ToString("MM-dd-yy-hh-mm")
New-Item $file -ItemType file
</powershell>
EOF
//...
}
You'll need a variable like this defined to use that (I'm providing a more complex example so there's a more useful starting point):
variable "some_variable" {
type = string
default = "UserDataTestFile"
}
Instead of creating a timestamped file like the example above, you can invoke DSC to set up IIS as you normally would interactively from PowerShell on a server, as sketched below.
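For instance, a minimal DSC configuration that installs the Web-Server role might look like this inside the <powershell> block (a sketch using the built-in PSDesiredStateConfiguration module; the configuration name and output path are placeholders, and a real setup would add site and app-pool resources):
Configuration InstallIIS {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node "localhost" {
        # Install the IIS Windows feature
        WindowsFeature WebServer {
            Ensure = "Present"
            Name   = "Web-Server"
        }
    }
}
# Compile the configuration to a MOF and apply it
InstallIIS -OutputPath "C:\DSC"
Start-DscConfiguration -Path "C:\DSC" -Wait -Verbose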
You can read more about user_data on Windows here:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-windows-user-data.html
user_data will include your PowerShell directly.
You can use templatefile("${module.path}/user-data.ps1", {some_variable = var.some_variable}) instead of an inline script as above.
Have user-data.ps1 in the same directory as the TF file that references it:
<powershell>
$file = $env:SystemRoot + "\Temp\${some_variable}" + (Get-Date).ToString("MM-dd-yy-hh-mm")
New-Item $file -ItemType file
</powershell>
You still need the <powershell></powershell> tags around your script source code. That's a requirement of how Windows on EC2 expects PowerShell user-data scripts.
And then update your TF file as follows:
resource "aws_instance" "windows_2016_server" {
//...
user_data = templatefile("${module.path}/user-data.ps1, {
some_variable = var.some_variable
})
//...
}
Note that the file read by templatefile references variables like some_variable and NOT var.some_variable.
Read more about templatefile here:
https://www.terraform.io/docs/configuration/functions/templatefile.html

How do I set up AWS credentials on Media Temple DV

I am having a hard time setting up the credentials for AWS S3 usage via the aws-php-sdk on Media Temple.
I continue to receive the error: Cannot read credentials from /.aws/credentials
I tried to follow the guide to install the AWS CLI via https://docs.aws.amazon.com/cli/latest/userguide/awscli-install-linux.html. I then used the steps at https://docs.aws.amazon.com/cli/latest/userguide/tutorial-ec2-ubuntu.html#configure-cli to set the credentials.
... But I get that error still.
I then had a chat with Media Temple support, who created the .aws/credentials file in the root directory, but then the error message changed to:
Warning: is_readable(): open_basedir restriction in effect. File(/.aws/credentials) is not within the allowed path(s)
MT advised me to not change the basedir settings. They also advised me to simply change where the credentials are read from if possible.
Anyone successfully use AWS credentials on MT?
Trying to do this with the AWS CLI via SSH was like beating my head against a brick wall on Media Temple.
I then tried to set the credentials via environment variables, but that was a no-go.
I then got the idea to put the credentials file within a directory that PHP could access. However, I had to tell the aws-php-sdk where to look for it. I found an environment variable in some documentation and tried to set it via PHP's putenv() function. No dice.
I then searched the aws-php-sdk source for the initial error I was seeing and backtracked until I found where the credentials file location was being set. It turns out the documentation was wrong, and the correct environment variable name was HOME.
In the end, all that was needed was to set HOME prior to using the SDK. Easy enough, but it should have been 100x easier to figure out. Something along these lines:
// Set environment variable for credentials location
putenv('HOME=../');

// Set bucket name
$this->bucket = $bucket;

// Create an S3Client
$this->s3Client = new Aws\S3\S3Client([
    'profile' => $this->profile,
    'version' => $this->version,
    'region'  => $this->region
]);