Connecting to Salesforce using Pub / Sub API (gRPC)

The Pub/Sub API is currently in pilot. Talk to your AE to have it enabled in your org. The implementation below may change once the feature is GA.

The Pub/Sub API has been the talk of the town for some time, and I got to play with it recently. There are some code examples in the Pub/Sub API pilot documentation, but they are only available in Python. Being a Node.js enthusiast, I wanted to connect to the Pub/Sub API using Node.js.

npm packages
jsforce -> Gets the authentication details.
avro-js -> Converts the binary response from the subscribe call to JSON and vice versa.
grpc -> Core gRPC library.
@grpc/proto-loader -> Loads the proto file.
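All four can be installed in one go:

npm install jsforce avro-js grpc @grpc/proto-loader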

// npm packages & initialization
const grpc = require("grpc");
const fs = require("fs");
const jsforce = require("jsforce");
const avro = require("avro-js");
const protoLoader = require("@grpc/proto-loader");
const packageDef = protoLoader.loadSync("sf.proto", {});
const grpcObj = grpc.loadPackageDefinition(packageDef);
const sfdcPackage = grpcObj.eventbus.v1;
const root_cert = fs.readFileSync("roots.cer");

Authenticating & connecting to Salesforce
As of now the Pub/Sub API does not provide any method/RPC for authenticating the user. You will have to perform the login call using the jsforce library, or directly via the REST API, to get the details. The details that are required are:
– accesstoken
– instanceurl
– tenantid (the org Id)
These details are set as metadata headers and are used to establish the initial connection with the gRPC server.

const conn = new jsforce.Connection({
  loginUrl: "https://test.salesforce.com",
});
const connectionResult = await conn.login(username, password);
const orgId = connectionResult.organizationId;
const metaCallback = (_params, callback) => {
  const meta = new grpc.Metadata();
  meta.add("accesstoken", conn.accessToken);
  meta.add("instanceurl", conn.instanceUrl);
  meta.add("tenantid", orgId);
  callback(null, meta);
};
const callCreds = grpc.credentials.createFromMetadataGenerator(metaCallback);
const combCreds = grpc.credentials.combineChannelCredentials(
  grpc.credentials.createSsl(root_cert),
  callCreds
);
const client = new sfdcPackage.PubSub(
  "api.pilot.pubsub.salesforce.com:7443",
  combCreds
);

Topic Info & Schema Info
Salesforce transmits the data in Avro binary format. To read it, we need the schema information to convert the binary payload to readable JSON. The Salesforce proto file provides two RPCs for this:
GetTopic -> Pass the topic name (platform event or CDC name) to get the schemaId. This schemaId is then passed to the GetSchema RPC to get the actual schema.
GetSchema -> Pass the schemaId retrieved in the previous GetTopic call to get the schema, which is used for the Avro conversion.

const topicName = "/event/Your_Custom_event__e";
let schemaId = "";
let schema;
client.GetTopic({ topicName: topicName }, (err, response) => {
  if (err) {
    //throw error
  } else {
    //get the schema information
    schemaId = response.schemaId;
    client.GetSchema({ schemaId: schemaId }, (error, res) => {
      if (error) {
        //handle error
      } else {
        schema = avro.parse(res.schemaJson);
      }
    });
  }
});
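If you prefer async/await over nested callbacks, the two RPCs can be wrapped with Node's util.promisify. A minimal sketch, using the same client and topicName as above:

const { promisify } = require("util");
const getTopic = promisify(client.GetTopic).bind(client);
const getSchema = promisify(client.GetSchema).bind(client);

const topicInfo = await getTopic({ topicName: topicName });
const schemaInfo = await getSchema({ schemaId: topicInfo.schemaId });
const schema = avro.parse(schemaInfo.schemaJson); //same schema object as in the callback version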

Subscribe
Now that we have all the prerequisites in place, let's do the actual thing: subscribe to a platform event. One thing to note here is that both the input and the output of the Subscribe RPC are streams (a bidirectional stream).

const subscription = client.Subscribe(); //client here is the grpc client.
//Since this is a stream, you can call the write method multiple times.
//Only the required data is being passed here: the topic name & numRequested.
//Once the system has received a number of events equal to numRequested, the stream will end.
subscription.write({
  topicName: "/event/Your_platform_event__e",
  numRequested: 10,
});
//listen to new events.
subscription.on("data", function (data) {
  console.log("data => ", data);
  //data.events is an array of events. Below I am just parsing the first event.
  //Please add the logic to handle multiple events; a sketch follows this block.
  if (data.events) {
    const payload = data.events[0].event.payload;
    let jsonData = schema.fromBuffer(payload); //this schema is the same one we retrieved earlier via the GetSchema rpc.
    console.log("Event details ==> ", jsonData);
  } else {
    //if there are no events, then every 270 seconds the system will keep publishing the latestReplayId.
  }
});
subscription.on("end", function () {
  console.log("ended");
});
subscription.on("error", function (err) {
  console.log("error", JSON.stringify(err)); //handle errors
});
subscription.on("status", function (status) {
  console.log("status ==> ", status);
});
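The handler above only parses the first event. A minimal sketch of a data handler that walks the whole batch, assuming each entry carries a replayId alongside the event as the proto suggests:

subscription.on("data", function (data) {
  if (data.events) {
    data.events.forEach((evt) => {
      console.log("replayId ==> ", evt.replayId);
      console.log("payload ==> ", schema.fromBuffer(evt.event.payload));
    });
  }
});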

Publishing an event
For publishing an event, Salesforce's proto offers two RPCs (a bidirectional stream and a unary call). We will use the unary RPC to publish an event.

const dataToPublish = {
  Some_Text_Field__c: { string: "Test" },
  Another_Boolean_Field__c: { boolean: false },
  CreatedDate: new Date().getTime(), //This is required.
  CreatedById: "0057X000003ilJfQAI", //Id of the current user. This is required.
};
client.Publish(
  {
    topicName: "/event/Your_Custom_Event__e",
    events: [
      {
        id: "124", //this can be any string
        schemaId: schemaId, //This is the same schemaId that we retrieved earlier.
        payload: schema.toBuffer(dataToPublish), //The schema here is the avro generated schema.
      },
    ],
  },
  function (err, response) {
    if (err) {
      console.log("error from publishing ", err);
    } else {
      console.log("Response ", response);
    }
  }
);
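If you are publishing events in volume, the bidirectional streaming RPC saves a round trip per call. A minimal sketch, assuming the pilot proto exposes it as PublishStream with the same request shape as the unary Publish:

const pubStream = client.PublishStream();
pubStream.on("data", (res) => console.log("publish response ==> ", res));
pubStream.on("error", (err) => console.log("publish error ==> ", err));
pubStream.write({
  topicName: "/event/Your_Custom_Event__e",
  events: [
    {
      id: "125",
      schemaId: schemaId,
      payload: schema.toBuffer(dataToPublish),
    },
  ],
});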

Hope it helps!

Salesforce File Migration (ContentVersion) via Node.js

As part of a project, I was assigned the task of moving ContentVersion records from one org (legacy) to another. I used Node.js as my go-to language to achieve this. The first task was to download the file from the legacy org and then upload it to the active org, without storing the file on the local file system.

One important thing to note here: there are two different approaches to inserting ContentVersion records
– Use the JSON structure to insert a record into ContentVersion when the content is smaller than 37 MB.
– Use the multipart method to insert the blob when the content is larger than 37 MB.

In the code below, you will see that I am fetching the latest ContentVersion file and inserting it back into the same org. But you can obviously connect to another org and push the file into that one.

const jsforce = require("jsforce");
const axios = require("axios");
var FormData = require("form-data");
const getStream = require("get-stream");
const mime = require("mime-types");
const conn = new jsforce.Connection({
  loginUrl: "https://login.salesforce.com",
});
const username = "";
const password = "";
const main = async () => {
  await conn.login(username, password);
  const sourceContentVersionFile =
    await conn.query(`SELECT Id, Title, ContentSize, VersionData, PathOnClient
                      FROM ContentVersion
                      ORDER BY CreatedDate DESC
                      LIMIT 1`);
  const contentVersionRecord = sourceContentVersionFile.records[0];
  if (contentVersionRecord.ContentSize > 37000000) {
    // size greater than 37 MB: use the multipart blob insert.
    const fileStream = await getFile(contentVersionRecord, false);
    const formData = createFormData(contentVersionRecord, fileStream);
    const URL =
      conn.instanceUrl + "/services/data/v51.0/sobjects/ContentVersion";
    await axios({
      method: "post",
      maxContentLength: Infinity,
      maxBodyLength: Infinity,
      url: URL,
      headers: {
        Authorization: "Bearer " + conn.accessToken,
        "Content-Type": `multipart/form-data; boundary="boundary_string"`,
      },
      data: formData,
    });
  } else {
    const base64Body = await getFile(contentVersionRecord, true);
    await conn.sobject("ContentVersion").insert({
      Title: contentVersionRecord.Title,
      PathOnClient: contentVersionRecord.PathOnClient,
      VersionData: base64Body,
      FirstPublishLocationId: "0012w00000rTbXNAA0", //Id to which the content version needs to be linked
      Origin: "H",
    });
  }
};
const getFile = async (data, generateBase64String) => {
  const file = await axios({
    method: "get",
    url: conn.instanceUrl + data.VersionData,
    headers: {
      Authorization: "Bearer " + conn.accessToken,
    },
    responseType: "stream",
  });
  if (generateBase64String) {
    return await getStream(file.data, { encoding: "base64" });
  } else {
    return file.data; // return the stream
  }
};
const createFormData = (data, file) => {
  const contentVersion = {
    FirstPublishLocationId: "0012w00000rTbXNAA0",
    Title: data.Title,
    PathOnClient: data.PathOnClient,
    Origin: "H",
  };
  const form = new FormData();
  form.setBoundary("boundary_string");
  form.append("entity_content", JSON.stringify(contentVersion), {
    contentType: "application/json",
  });
  form.append("VersionData", file, {
    filename: data.PathOnClient,
    contentType: mime.lookup(data.PathOnClient),
  });
  return form;
};
main();

A couple of things:
– The above code shows the migration of a single file; however, the same logic can be used to migrate any number of files, as sketched below.
– I am pulling and pushing the file within the same org. You can however connect to multiple Salesforce orgs and move files between them.
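A rough sketch of scaling this to many files, assuming the size check and upload logic above is wrapped in a hypothetical migrateOne helper (jsforce can page through large result sets with autoFetch):

const result = await conn.query(
  "SELECT Id, Title, ContentSize, VersionData, PathOnClient FROM ContentVersion",
  { autoFetch: true, maxFetch: 10000 } //jsforce keeps calling queryMore for us
);
for (const record of result.records) {
  await migrateOne(record); //hypothetical helper wrapping the logic shown above
}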

Hope it helps!

Salesforce LWC Editor in browser

Salesforce made a great move by making web components part of the ecosystem by introducing Lightning Web Components (LWC). But for some reason, they decided not to provide the ability to code them directly from the Developer Console. Currently the only way to do it is by using VS Code. This is great, but if you're in a hurry and want to make some quick changes, it won't be easy.

To resolve this, I decided to build a quick-to-use LWC editor into my Chrome extension, Salesforce Advanced Code Searcher.

How to use:

  1. Install the extension from: https://chrome.google.com/webstore/detail/salesforce-advanced-code/lnkgcmpjkkkeffambkllliefdpjdklmi
  2. Navigate to your Salesforce setup page. You should see a “Click here to authorize” button. Click on it to authorize the extension.

3. Once the authorization is complete, navigate to the LWC Editor tab and you should see all the components that already exist in the org. In case there are none, you can start by creating one.


What can you do?

  1. You can edit the components and save them back.

Look for the component you need in the left file explorer and click its link; the file opens in a new tab. Make the required changes and hit Ctrl+S (PC) or Cmd+S (Mac) to save. Any errors are displayed in the console log right below the editor.

2. You can add a file to an existing component.


Right-click on the component where you want to add a file. You will be presented with two options: New File & Delete. Click on New File to open a modal dialog where you can key in the file name (do not add an extension). Click on the Create File button and the file will be added under the component.

3. You can delete files in the component.


4. You can also change the theme:


Running PMD against Apex & Triggers.

It is always best practice to ensure you comply with coding standards when you write your Apex classes or triggers, and you probably already make sure you do. But how do you make sure your entire team is complying with the coding standards? How do you make sure your team, your company, is delivering the best to the client?

The answer to this is static code analysis. There are any number of tools available on the market which will run static code analysis against a Salesforce code base. A few of them:

  1. PMD
  2. Checkmarx
  3. Fortify

Both Checkmarx and Fortify are paid, whereas PMD is completely free of cost. The next question that comes to mind: how do I run PMD against my code base?

Well, there are multiple options:

  1. You can download the executable, configure it, identify the rules and run it locally every time you want to push code to UAT or Production.
  2. You and your entire team install the Apex PMD VS Code plugin, which flags issues every time you code. But Apex can also be written using the Developer Console or other online IDEs like aside.io, so you cannot always rely on your teammates to use Apex PMD.
  3. The simplest option: install the Salesforce Advanced Code Searcher Chrome extension, which will allow you to run the PMD checks right inside your browser. It spits out the results in an Excel sheet which you can then analyze and assign to team members to fix. And this works in sandboxes as well as production orgs.

So, what are you waiting for? Install the plugin and discover the mess your developers have created 🙂

P.S.: This extension also allows you to run static code analysis against your Aura components.

zForce- A simple app connecting Zillow and Salesforce

Built an app which allows users to connect Zillow directly from Salesforce. Some of the features supported by the application:

  • Fully supported on Lightning Experience and Salesforce1
  • Absolutely no cost for using the application
  • Easily connect Salesforce and Zillow
  • Supported on all custom and standard objects
  • Drag and drop the component on any Lightning page
  • Easy setup using the settings tab

Please click below to install the app:

https://satrag.herokuapp.com/

c3 charts and LWC

I wanted to try out some charting in LWC and I thought, what better framework than C3? In this post we will see a component which displays the opportunities, in their various stages, associated with an account.


As a prerequisite you will need the C3 and D3 (C3 has a dependency on D3.js) libraries loaded as static resources for the charting to work.

Download the latest version of the C3 and D3 from below.

C3 -> https://github.com/c3js/c3/releases/latest

D3 -> https://github.com/d3/d3/releases/download/v5.9.7/d3.zip

A couple of important points for charting:

  1. Load the dependencies in the order mentioned below: D3 first, followed by C3.
  2. We have added a parameter t and assigned the current time to it.
    This is done to make sure the cache is refreshed every time the scripts are loaded.
    If we do not do this, there is an issue when we add multiple charts in a single component.


<template>
    <!--Render the chart only when the opportunity data is available.-->
    <template if:true={opportunityData}>
        <!--We did not strictly need a separate component for the chart,
            but I thought it would be a better idea to separate out the charting component
            for better readability-->
        <c-opportunity-pie-chart chart-data={opportunityData}></c-opportunity-pie-chart>
    </template>
</template>


import { LightningElement, api, wire, track } from 'lwc';
import opportunityAggregation from '@salesforce/apex/OpportunityDataService.aggregateOpportunities';
export default class OpportunityChartContainer extends LightningElement {
    @api recordId; //As the component is placed on an account detail page, we get access to the account Id.
    @track opportunityData = []; //The chart accepts data in multiple formats: array, object etc. We are using an array here.
    //get the data from the backend
    @wire(opportunityAggregation, { accountId: '$recordId' }) opptyData({ error, data }) {
        if (data) {
            let oppData = Object.assign({}, data);
            /*
            this.opportunityData will be in the below format:
            [['Closed Won',2],['Closed Lost',4],['Negotiation',5]]
            */
            for (let key in oppData) {
                if (oppData.hasOwnProperty(key)) {
                    let tempData = [key, oppData[key]];
                    this.opportunityData.push(tempData);
                }
            }
        }
        if (error)
            console.log(error);
    }
}


<?xml version="1.0" encoding="UTF-8"?>
<LightningComponentBundle xmlns="http://soap.sforce.com/2006/04/metadata" fqn="opportunityChartContainer">
    <apiVersion>46.0</apiVersion>
    <isExposed>true</isExposed>
    <targets>
        <target>lightning__RecordPage</target>
        <target>lightning__AppPage</target>
        <target>lightning__HomePage</target>
    </targets>
</LightningComponentBundle>


public with sharing class OpportunityDataService {
    @AuraEnabled(cacheable=true)
    public static Map<String, Integer> aggregateOpportunities(String accountId){
        Map<String, Integer> opportunityStatusMap = new Map<String, Integer>();
        //Aggregate the opportunities by stage.
        for(AggregateResult aggr : [SELECT Count(Id), StageName
                                    FROM Opportunity
                                    WHERE AccountId = :accountId
                                    GROUP BY StageName]) {
            opportunityStatusMap.put((String)(aggr.get('StageName')), (Integer)(aggr.get('expr0')));
        }
        return opportunityStatusMap;
    }
}


<template>
    <lightning-card title="Opportunities">
        <!--C3 manipulates the DOM, and in order for it to do so we need to add the lwc:dom attribute-->
        <div lwc:dom="manual" class="c3"></div>
    </lightning-card>
</template>


import { LightningElement, api, track } from 'lwc';
import { loadStyle, loadScript } from 'lightning/platformResourceLoader';
import C3 from '@salesforce/resourceUrl/c3'; //load the c3 and d3 from the static resources.
import D3 from '@salesforce/resourceUrl/d3';
export default class OpportunityPieChart extends LightningElement {
    @api chartData;
    @track chart;
    @track loadingInitialized = false;
    librariesLoaded = false;
    //This is called when the DOM is rendered.
    renderedCallback() {
        if (this.librariesLoaded)
            this.drawChart();
        if (!this.loadingInitialized) {
            this.loadingInitialized = true;
            /*
            We have added a parameter t and assigned the current time to it.
            This is done to make sure the cache is refreshed every time the scripts are loaded.
            If we do not do this, there is an issue when we add multiple charts in a single component.
            */
            Promise.all([
                loadScript(this, D3 + '/d3.min.js?t=' + new Date().getTime()),
                loadScript(this, C3 + '/c3-0.7.4/c3.min.js?t=' + new Date().getTime()),
                loadStyle(this, C3 + '/c3-0.7.4/c3.min.css?t=' + new Date().getTime()),
            ])
            .then(() => {
                this.librariesLoaded = true;
                //Call the charting method when all the dependent javascript libraries are loaded.
                this.drawChart();
            })
            .catch(error => {
                console.log('The error ', error);
            });
        }
    }
    drawChart() {
        this.chart = c3.generate({
            bindto: this.template.querySelector('div.c3'), //Select the div to which the chart will be bound.
            data: {
                columns: this.chartData,
                type: 'donut'
            },
            donut: {
                title: 'Opportunities',
                label: {
                    format: function (value, ratio, id) {
                        return value;
                    }
                }
            }
        });
    }
    //destroy the chart once the element is destroyed
    disconnectedCallback() {
        this.chart.destroy();
    }
}
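If the wired data changes after the first render, you do not have to regenerate the chart; C3 can update it in place. A minimal sketch, as a hypothetical method you could call whenever chartData changes:

updateChart(newColumns) {
    if (this.chart) {
        //C3 redraws with a transition instead of rebuilding the DOM.
        this.chart.load({ columns: newColumns });
    }
}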

This is just a simple example to get you started with C3. Please share the beautiful charts that you build. Enjoy charting!

LWC-Lightning Datatable sorting

This is a quick post to understand how sorting is accomplished in lightning datatable (LWC).

Lightning datatable provides an onsort attribute which allows us to implement sorting for the elements in the table. You can implement sorting either locally or via a server call.

Local Sorting

You would generally implement this type of sorting when you know that the list of elements is going to be small and bounded. For example, if you're displaying a list of approval history records for a particular record, you know that the number of records won't be large (assuming you have a simple approval process).

In the example below, we have created a simple component which displays a list of contacts (FirstName and LastName). When a header column is clicked, the onsort handler is invoked with the field and the sort order, and using this information the list is sorted.


<template>
    <lightning-datatable
        key-field="Id"
        data={results}
        columns={columns}
        hide-checkbox-column="false"
        show-row-number-column="true"
        onsort={updateColumnSorting}
        sorted-by={sortBy}
        sorted-direction={sortDirection}
        row-number-offset="0">
    </lightning-datatable>
</template>


import { LightningElement, track, wire } from 'lwc';
import fetchContact from '@salesforce/apex/ContactList.fetchContactLocal';
const dataTablecolumns = [{
    label: 'First Name',
    fieldName: 'FirstName',
    sortable: true,
    type: 'text'
},
{
    label: 'Last Name',
    fieldName: 'LastName',
    type: 'text',
    sortable: true
}];
export default class ContactListLocalSort extends LightningElement {
    @track results = [];
    @track columns = dataTablecolumns;
    @track sortBy = 'FirstName';
    @track sortDirection = 'asc';
    @wire(fetchContact) contactList({ error, data }) {
        if (data)
            this.results = Object.assign([], data);
        if (error)
            console.log(error);
    }
    updateColumnSorting(event) {
        let fieldName = event.detail.fieldName;
        let sortDirection = event.detail.sortDirection;
        //assign the values
        this.sortBy = fieldName;
        this.sortDirection = sortDirection;
        //call the custom sort method.
        this.sortData(fieldName, sortDirection);
    }
    //The sorting logic here is very simple and only works for text or number fields.
    //You will need custom logic to handle other types of fields.
    sortData(fieldName, sortDirection) {
        let sortResult = Object.assign([], this.results);
        this.results = sortResult.sort(function (a, b) {
            if (a[fieldName] < b[fieldName])
                return sortDirection === 'asc' ? -1 : 1;
            else if (a[fieldName] > b[fieldName])
                return sortDirection === 'asc' ? 1 : -1;
            else
                return 0;
        });
    }
}
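As the comment above notes, the plain < comparison only really works for text and numbers. Below is a minimal sketch of a type-aware comparator; the fieldType parameter is a hypothetical addition you would look up from the column definition:

sortData(fieldName, sortDirection, fieldType) {
    const direction = sortDirection === 'asc' ? 1 : -1;
    const clone = Object.assign([], this.results);
    this.results = clone.sort((a, b) => {
        let x = a[fieldName], y = b[fieldName];
        if (fieldType === 'date') {
            //convert ISO date strings to epoch millis before comparing
            x = new Date(x).getTime();
            y = new Date(y).getTime();
        }
        return x === y ? 0 : (x > y ? direction : -direction);
    });
}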


Sorting via server call

As you would have guessed, this is useful when you have a huge list and are displaying only a small chunk of the total records.

The major advantage of this method is that you need not implement any custom sorting logic at all; the sorting is handled entirely by SOQL. Another thing to note is that you should disable the datatable so that users do not click on a header while the server call is executing.


<template>
    <lightning-datatable
        key-field="Id"
        data={results}
        columns={columns}
        hide-checkbox-column="false"
        show-row-number-column="true"
        onsort={updateColumnSorting}
        sorted-by={sortBy}
        sorted-direction={sortDirection}
        row-number-offset="0">
    </lightning-datatable>
</template>


import { LightningElement, track, wire } from 'lwc';
import fetchContact from '@salesforce/apex/ContactList.fetchContact';
const dataTablecolumns = [{
    label: 'First Name',
    fieldName: 'FirstName',
    sortable: true,
    type: 'text'
},
{
    label: 'Last Name',
    fieldName: 'LastName',
    type: 'text',
    sortable: true
}];
export default class ContactListServerSort extends LightningElement {
    @track results = [];
    @track columns = dataTablecolumns;
    @track sortBy = 'FirstName';
    @track sortDirection = 'asc';
    //since we have used dynamic wiring,
    //the list is automatically refreshed when the sort field or direction changes.
    @wire(fetchContact, { field: '$sortBy', sortOrder: '$sortDirection' }) contactList({ error, data }) {
        if (data)
            this.results = Object.assign([], data);
        if (error)
            console.log(error);
    }
    updateColumnSorting(event) {
        let fieldName = event.detail.fieldName;
        let sortDirection = event.detail.sortDirection;
        //assign the values. This will trigger the wire method to reload.
        this.sortBy = fieldName;
        this.sortDirection = sortDirection;
    }
}
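The disabling mentioned earlier is not shown above. A minimal sketch using the datatable's is-loading attribute (add is-loading={isLoading} to the lightning-datatable in the template), assuming you reset the flag once the wire responds:

isLoading = false;
updateColumnSorting(event) {
    this.isLoading = true; //show the spinner while the wire refetches
    this.sortBy = event.detail.fieldName;
    this.sortDirection = event.detail.sortDirection;
}
//and in the wired handler, once data or error arrives:
//this.isLoading = false;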

Apex class


public with sharing class ContactList {
    @AuraEnabled (Cacheable=true)
    public static List<Contact> fetchContactLocal(){
        return [SELECT Id, FirstName, LastName
                FROM Contact
                ORDER BY FirstName ASC];
    }
    @AuraEnabled (Cacheable=true)
    public static List<Contact> fetchContact(String field, String sortOrder){
        //Note: field & sortOrder are concatenated into the query string, so in real code
        //validate/whitelist them to avoid SOQL injection.
        return Database.query('SELECT Id, FirstName, LastName FROM Contact ORDER BY ' + field + ' ' + sortOrder);
    }
}


Error: An object ‘customMetadata.record.md’ of type CustomMetadata was named in package.xml, but was not found in zipped directory

This issue pops up when you try to deploy custom metadata from one org to another. The process I followed was to retrieve the components from Dev using ANT and then deploy them to QA using ANT. Retrieval of the components worked without any issue, but the name with which the components were retrieved was wrong.


If we observe carefully, we do not see the __mdt suffix in the extracted metadata file names, but the deploy call expects the metadata name to contain '__mdt'.


Just modify the file name of the custom metadata record to add '__mdt' to the type name, as sketched below, and that should do the trick.
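A minimal sketch of the rename, assuming a record My_Record of a custom metadata type My_Type:

customMetadata/My_Type.My_Record.md      <- as retrieved
customMetadata/My_Type__mdt.My_Record.md <- rename to this before deploying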


Hope it helps!

MultiFile upload / Drag and drop Attachments

Attaching multiple files as attachments to records is a pain. It takes a lot of time and a lot of navigation. Today, I want to share a small piece of code I have written as a home page component. It injects a small section into the 'Notes and Attachments' related list that allows users to upload multiple files, either by selecting them through the file input or by dragging and dropping them.


So how do you implement this in your org?

1) Add the main script to a JS file and upload it as a static resource.

2) Create a new custom link.

3) Reference the JS file in the custom link you created in the previous step. We will also need jQuery, so we will add that as well.

4) Add that custom link to a home page component of type Links.

5) Add the home page component to the home page layout and you're done. For more details on this hacking method, you can check this link here

So here we go:

1) Create a JavaScript file with the below code and add it as a static resource with the name attachmentjs.

jQuery(function(){

    /*Checks when the notes and attachment section is loaded and then initiates the process.*/
    var timeInterval = setInterval(function(){
        if(jQuery("div.bRelatedList[id$=RelatedNoteList]").find("div.pbHeader").find("td.pbButton").find("input[name=attachFile]").length > 0){
            addAttachButton();
            clearInterval(timeInterval);
        }
    },100);


});

/*Adds the drop zone div in the notes and attachments section. Event listeners are added to detect when files are dropped in the zone.*/
function addAttachButton() {

    var attachmentDiv = jQuery("div.bRelatedList[id$=RelatedNoteList]");
    insertButton();

    attachmentDiv.find("div.pbHeader").find("td.pbButton").after(
        jQuery("<td>", {
            style   : "width:35%;cursor:pointer;"
        }).append(jQuery("<div>",{
                style : "height: 30px;  width: px;border-color: orange;border: 3px solid orange;border-style: dotted;border-radius: 4px;text-align: center;vertical-align: middle;line-height: 2;color: Red;font-family: monospace;font-size: 14px;",
                id : "dropDiv"
            }).append(jQuery("<span>",{id:"clickHere"}).text('Drop files here / click here!') , jQuery("<span>",{id:"uploadingMessage",style:"display:none;"}).text('Uploading your Files, please wait!'))
        )
    );

    jQuery("#dropDiv").on('click',function(){
        if(jQuery("#uploadingMessage:hidden").length > 0) {
            if(jQuery("#multiUploadButton").length > 0) {
                jQuery("#multiUploadButton")[0].click();
            }else {
                insertButton();
                jQuery("#multiUploadButton")[0].click();
            }
        } else {
            alert('Your files are being uploaded. Please Wait!');
        }
    });

    function handleFileSelect(evt) {
        evt.stopPropagation();
        evt.preventDefault();
        var files = evt.dataTransfer.files; // FileList object.
        processFiles(files);
    }    
    function handleDragOver(evt) {
        evt.stopPropagation();
        evt.preventDefault();
        evt.dataTransfer.dropEffect = 'copy'; // Explicitly show this is a copy.
    }

    var dropZone = document.getElementById('dropDiv');
    dropZone.addEventListener('dragover', handleDragOver, false);
    dropZone.addEventListener('drop', handleFileSelect, false);
}


function processFiles(files) {
    hideDiv();
    var insertedFilesArray = [];
    var fileInsertionCounter = 0;

        for(var i=0;i<files.length;i++) {
                    
            var reader = new FileReader();
            reader.onload = (function(newFile,pId) {
                return function(e) {
                    var attachmentBody = e.target.result.substring(e.target.result.indexOf(',')+1,e.target.result.length);
                        
                    jQuery.ajax({
                        type: "POST",
                        url: "https://"+window.location.host+"/services/data/v33.0/sobjects/Attachment/",
                        contentType:"application/json; charset=utf-8",
                        headers: { 
                            "Authorization" : "Bearer "+getCookie('sid')
                        },
                        data    : JSON.stringify({'name':newFile.name,'body':attachmentBody,'parentId':pId,'isprivate':false})
                    }).success(function(result) {
                        fileInsertionCounter++;
                        if(fileInsertionCounter == files.length){
                            showDiv();
                        }
                    }).fail(function(err){
                        alert('Unable to insert file.\nContact your Administrator with this error message:\n' + JSON.stringify(err));
                        fileInsertionCounter++;
                        //if all the files are uploaded then show the zone div.
                        if(fileInsertionCounter == files.length){
                            showDiv();
                        }
                    });
                };
            })(files[i],jQuery("div.bRelatedList[id$=RelatedNoteList]").attr('id').split('_')[0]);
            
            reader.readAsDataURL(files[i]);
        }
}

/*Inserts the input file button. This is hidden from the view.*/
function insertButton() {
    if(jQuery("#multiUploadButton").length == 0) {
        var attachmentDiv = jQuery("div.bRelatedList[id$=RelatedNoteList]");
        attachmentDiv.find("div.pbHeader").find("td.pbButton").append(
            jQuery("<input>",
                {id     : "multiUploadButton",
                value   : "Multiple Upload",
                type    : "file",
                multiple: "multiple",
                style   : "display:none;"
                })
        );
        jQuery("#multiUploadButton").on('change',function(){
            processFiles(document.getElementById("multiUploadButton").files);
        });
    }
}

function hideDiv() {
    jQuery("#uploadingMessage").show();
    jQuery("#clickHere").hide();
}

function showDiv() {

    jQuery("#uploadingMessage").hide();
    jQuery("#clickHere").show();
    RelatedList.get(jQuery("div.bRelatedList[id$=RelatedNoteList]").attr('id')).showXMore(30);
}

function getCookie(c_name){
    var i,x,y,ARRcookies=document.cookie.split(";");
    for (i=0;i<ARRcookies.length;i++){
        x=ARRcookies[i].substr(0,ARRcookies[i].indexOf("="));
        y=ARRcookies[i].substr(ARRcookies[i].indexOf("=")+1);
        x=x.replace(/^\s+|\s+$/g,"");
        if (x==c_name){
            return unescape(y);
        }
    }
}

2) Create a custom link called 'Attacher Custom Link' and then add the below code. Make sure you select the behavior 'Execute JavaScript'.

  {!REQUIRESCRIPT('//code.jquery.com/jquery-2.1.3.min.js')}
  {!REQUIRESCRIPT('/resource/attachmentjs')}

Follow all the steps mentioned above and you should be set. I have tested this on the latest Chrome and Firefox browsers and it works just fine.