Salesforce LWC Editor in the browser

Salesforce made a great move by bringing Web Components into its ecosystem with Lightning Web Components (LWC). But for some reason, they decided not to provide the ability to code them directly from the Developer Console. Currently the only way to do it is by using VS Code. This is great, but if you're in a hurry and want to make some quick changes, it isn't easy.

To resolve this, I decided to build a quick-to-use LWC editor into my Chrome extension, Salesforce Advanced Code Searcher.

How to use:

  1. Install the extension from the Chrome Web Store: https://chrome.google.com/webstore/detail/salesforce-advanced-code/lnkgcmpjkkkeffambkllliefdpjdklmi
  2. Navigate to your Salesforce Setup page. You should see a “Click here to authorize” button. Click it to authorize the extension.

3. Once the authorization is complete, navigate to the LWC Editor tab and you should see all the components that already exist in the org. If there are none, you can start by creating one.
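If you are creating one from scratch, a bare-bones component needs only an HTML template and a JavaScript class; the name helloWorld below is just an example (a bundle deployed through the Metadata API also needs a helloWorld.js-meta.xml file declaring its apiVersion):

// helloWorld.js
import { LightningElement } from 'lwc';

export default class HelloWorld extends LightningElement {}

<!-- helloWorld.html -->
<template>
	<p>Hello from LWC</p>
</template>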


What can you do?

1. You can edit the components and save them back.

Find the component you are looking for in the file explorer on the left, click the link, and the file opens in a new tab. Make the required changes and hit Ctrl+S (PC) or Cmd+S (Mac) to save. If there are errors, they will be displayed in the console log right below the editor.

2. You can add a file to an existing component.


Right-click on the component where you want to add a file. You will be presented with two options: New File & Delete. Click New File to open a modal dialog where you can key in the file name (do not add an extension). Click the Create File button and the file will be added under the component.

3. You can delete files within a component.


4. You can also change the theme:


Running PMD against Apex & Triggers.

It is always best practice to comply with coding standards when you write your Apex classes or triggers, and you probably already make sure you do. But how do you make sure your entire team is complying with the standard? How do you make sure your team, and your company, is delivering the best to the client?

The answer to this is static code analysis. There are any number of tools on the market that will run static code analysis against a Salesforce code base. A few of them:

  1. PMD
  2. Checkmarx
  3. Fortify

Checkmarx & Fortify are both paid products, whereas PMD is completely free. The next question that comes to mind: how do I run PMD against my code base?

Well, there are multiple options:

  1. You can download the executable, configure it, identify the rules, and run it locally every time you want to push code to UAT or Production (see the sketch after this list).
  2. You and your entire team install the Apex PMD VS Code plugin, which flags violations as you code. But Apex can also be written in the Developer Console or in online IDEs like aside.io, so you cannot always rely on your teammates to use Apex PMD.
  3. The simplest option: install the Salesforce Advanced Code Searcher Chrome extension, which lets you run the PMD checks right inside your browser. It spits the results out into an Excel sheet that you can analyze and assign to team members to fix. This works in both sandboxes and Production orgs.
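If you go with option 1, the local run boils down to pointing the PMD command line at your classes directory with an Apex ruleset. A minimal sketch, assuming the PMD 6.x binary distribution on Mac/Linux (the exact flags vary between PMD versions, so double-check them against the PMD docs for your release; the directory and report paths are examples):

# run from the folder where you extracted PMD
./bin/run.sh pmd -d /path/to/src/classes -R rulesets/apex/quickstart.xml -f csv -r pmd-report.csv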

So, what are you waiting for? Install the plugin and discover the mess your developers have created 🙂

P.S.: This extension also allows you to run static code analysis against your Aura components.

Error: An object ‘customMetadata.record.md’ of type CustomMetadata was named in package.xml, but was not found in zipped directory

This issue pops up when we try to deploy custom metadata from one org to another. The process I followed was to retrieve the components from Dev using ANT and then deploy them to QA using ANT. Retrieval worked without any issue, but the name the components were retrieved with was wrong.

[Screenshot: componentFailure]

If we observe carefully, we do not see the __mdt in the extracted metadata, but the deployment expects the metadata file name to contain ‘__mdt’.

After retrieve:

[Screenshot: beforeRetrieve]

Just modify the names of the custom metadata files to add the ‘__mdt’, and that should do the trick.

[Screenshot: beforeDeployment]
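To make the fix concrete, with a hypothetical custom metadata type Country_Setting and a record named US, the rename inside the extracted customMetadata folder looks like this:

customMetadata/Country_Setting.US.md        <-- as retrieved by ANT
customMetadata/Country_Setting__mdt.US.md   <-- renamed so the deployment can find it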

Hope it helps!

node.js + ALM + REST API = Awesome

Every day you see people (or yourself) doing some mundane task, and you wonder how you can help them automate the process. This article highlights something I did a few months back to help a friend who used to spend a considerable amount of time fetching data from ALM (earlier known as QC), formatting it, and putting it in another database.

I had always wanted to use node.js for something useful, so I decided to set up a server to pull data from the ALM server and push it to the other database. Until this point I had no idea whether ALM exposed any APIs. Luckily, they had opened up access to ALM 11.00 and higher via a REST API. I wanted everything on the server side; there was no UI involved.

So I was able to come up with a small script which authenticates the user into ALM and then gets all the defects related to a release. I have provided comments wherever necessary in the code below. I am still a noob at node.js, so please excuse any mistakes.


var https = require('https'),
	fs = require('fs'),
	config = JSON.parse(fs.readFileSync('config.json'));//a file holding all my config, like host, username, password, etc. (see the example after the script)

//this is added to avoid the TLS error. Uncomment if you get a TLS error while authenticating.
//process.env['NODE_TLS_REJECT_UNAUTHORIZED'] = '0';

//set the correct options for the call.
var options = {
	host : config.host, 
	path : "/qcbin/authentication-point/authenticate",
	method: "GET",
	headers : {'Authorization': 'Basic '+ Buffer.from(config.alm_userName + ':' + config.alm_password).toString('base64')}//Buffer.from replaces the deprecated new Buffer()
};

//authenticate the user into ALM
ALMConnect(options, 'header', '', function(status, data){
	if(status){
		//grab the LWSSO cookie from the header. This is the session cookie used in all subsequent calls to ALM.
		if(data.headers["set-cookie"] != undefined ) {
			extractDefects(data.headers["set-cookie"]);
		}else{
			console.log('Dagnabbit!! ERROR: Unable to log in, check your username/password/server URL.');
		}
	}else{
		console.log('Dagnabbit!! ERROR: ' + JSON.stringify(data));
	}
});

//Function to extract the defects for analysis.
function extractDefects(LWSSO_Cookie){
	var queryParam = "{";
	//add Release
	queryParam += "detected-in-rel["+config.release+"];";
	//add all your other query parameters here. It's a little complicated at first, but you will get the hang of it.
	//make sure to run every value in the query parameters through encodeURIComponent().
	queryParam+="}";
	//get all the fields that you want to query. The fewer the fields, the smaller the returned XML and the faster the call.
	var fields = config.defectFieldMapping.fieldArray.join(',');
	var opt = {
		host: config.host,
		path: "/qcbin/rest/domains/"+config.domain+"/projects/"+config.project+"/defects?query="+queryParam+"&fields="+fields+"&page-size=max",
		method:"GET",
		headers: {"Cookie":LWSSO_Cookie}
	};

	ALMConnect(opt, 'data','',function(status,data){
		if(status){
			//write the defects to an XML file on the local drive.
			fs.writeFileSync('newDefect.xml',data);
			//once you have the defect XML, you can parse it into JSON and push it to other databases like SFDC.
		}else{
			console.log('Dagnabbit!! ERROR:  ' + JSON.stringify(data));
		}
	});
}

function ALMConnect(opt, responseType,requestBody, callback){

	var request = https.request(opt, function(res){
		res.setEncoding('utf8');
		var XMLoutput='';
		res.on('data',function(chunk){
			XMLoutput+=chunk;
		});
		res.on('end',function(){
			if(responseType=='data'){
				callback(true,XMLoutput);
			}else {
				callback(true, res);
			}
		});
	});
	request.on('error',function(e){
		callback(false,e);
	});
	if(opt.method=='POST' || opt.method == 'PUT'){
		request.write(requestBody);
	}
	request.end();
}
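For completeness, here is roughly what the config.json read at the top of the script contains. The keys are inferred from how the script uses the config object; the values below are placeholders, and (per the comment in extractDefects) the release value should already be URL-encoded:

{
	"host": "alm.yourcompany.com",
	"alm_userName": "your.username",
	"alm_password": "your.password",
	"release": "%27Release%201.0%27",
	"domain": "DEFAULT",
	"project": "MYPROJECT",
	"defectFieldMapping": {
		"fieldArray": ["id", "name", "status", "severity", "detected-by"]
	}
}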


Uploaded another extension to Chrome: Salesforce Code Coverage Extractor

I have taken this extension down for now. I need to implement some changes. It will be back up soon.

This extension helps admins/developers extract the overall code coverage from the Developer Console. All they have to do is hover over the Overall Code Coverage section, click the [Extract Data] link, and the CSV is downloaded.

You can also extract SOQL and SOSL results in the same manner from the Developer Console.

You can get the extension here:

https://chrome.google.com/webstore/detail/salesforce-code-coverage/lmndofengngdnomiikmhmhfddhgahnio