9/29/18

Enable a SharePoint REST API Post with RequestDigest Token

SharePoint provides a detailed set of RESTful APIs that allow us to interact with SharePoint data lists. When sending GET API calls, there is no need for additional security validation. When sending a PATCH (update) or POST (create) request, SharePoint requires an additional request header that carries an authorization token. In this article, we take a look at getting this authorization token and sending a request from an Angular application.

How to Get the Token

Before sending a POST request via the APIs, we need to first get a fresh (valid) token.  Luckily for us, the SharePoint APIs also provide an endpoint which can be used to get it.  Let's take a look at the code below to see how this is done:

Note: This snippet runs under the SharePoint site context, so the API URL is relative to the site location. For example, if your site URL is https://mysp.com/sites/mysite, the API URL should be https://mysp.com/sites/mysite/_api/
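As an alternative to a relative path, and assuming the script runs on a page rendered by SharePoint where the _spPageContextInfo global is available, we can build the API root from the page context. This is only a small sketch:


// _spPageContextInfo is provided by SharePoint on pages it renders
var apiRoot = _spPageContextInfo.webAbsoluteUrl + "/_api/";
var contextInfoUrl = apiRoot + "contextinfo";
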


function token() {

    // the contextinfo endpoint is relative to the current site
    var url = "../_api/contextinfo";

    $http({
        method: 'POST',
        url: url,
        headers: {
            'Content-Type': 'application/json;odata=verbose',
            'Accept': 'application/json;odata=verbose'
        }
    }).then(function success(resp) {
        var data = resp.data.d.GetContextWebInformation;
        // keep the digest header name and value for later POST/PATCH calls
        authToken = {};
        authToken.name = 'X-RequestDigest';
        authToken.value = data['FormDigestValue'];

    }, function error(resp) {
        console.log(resp);
    });

}


In this function, we call the _api/contextinfo endpoint, which returns the form digest value as a base64 encoded string.  The token also has an expiration window, defined by FormDigestTimeoutSeconds (1800 seconds, or 30 minutes, in the example below), which depends on the SharePoint configuration.  Once the promise is resolved, we capture the FormDigestValue from the JSON response and set it to a variable so we can reuse it when making other API calls.

The JSON from the API call should look like this:


{
    "d": {
        "GetContextWebInformation": {
            "__metadata": {
                "type": "SP.ContextWebInformation"
            },
            "FormDigestTimeoutSeconds": 1800,
            "FormDigestValue": "",          
            "SiteFullUrl": "",           
            "WebFullUrl": ""
        }
    }
}


Once the authorization/digest token is available, we can send a POST API call with the token value in the request header. This is done in the following code snippet:


function addItem(item) {

    // the metadata type must match the list item entity type of the target list
    var data = {
        "__metadata": {
            "type": "SP.Data.TodoItemsListItem"
        },
        "Title": item.title,
        "OData__Comments": item.comments
    };

    // url points to the items endpoint of the target list,
    // for example ../_api/web/lists/getbytitle('TodoItems')/items
    var request = $http({
        method: 'POST',
        url: url,
        headers: {
            'Content-Type': 'application/json;odata=verbose',
            'Accept': 'application/json;odata=verbose',
            'X-RequestDigest': authToken.value
        },
        data: JSON.stringify(data)
    });

    return request;
}


When creating or updating items on a data list, we need to send the item values as well as the metadata type information for the list. Without the metadata, the request will fail. We can identify the metadata type by first sending a GET request; the returned payload provides the data along with the corresponding metadata.
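As a sketch of that lookup (the list title TodoItems is only an example, so adjust it for your list), a GET call like the one below returns items whose __metadata.type value can be copied into the POST payload:


function getItems() {

    // the list title is an example; replace it with your list name
    var url = "../_api/web/lists/getbytitle('TodoItems')/items";

    return $http({
        method: 'GET',
        url: url,
        headers: {
            'Accept': 'application/json;odata=verbose'
        }
    }).then(function (resp) {
        // each result includes __metadata.type (e.g. SP.Data.TodoItemsListItem)
        return resp.data.d.results;
    });
}
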

In the rest of the code, we set the title and comments properties of the JSON payload. We then use the $http service to send a POST request with the header information. Notice the X-RequestDigest header entry, which matches the header name we stored when we initially got the token in the previous snippet. In this header, we set the security token value and send the request.

By adding the digest token to the header, the PATCH and POST API calls should succeed. We do need to remember that these tokens have an expiration window, so we should check for this and refresh the token when it is about to expire.
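One possible approach, shown here only as a sketch, is to record the expiration time using the FormDigestTimeoutSeconds value from the contextinfo response and request a new digest shortly before the current one expires. The tokenExpires variable, the helper names, and the one minute refresh margin are assumptions for this example:


var tokenExpires = 0;   // time in ms when the current digest expires

function setToken(data) {
    authToken = { name: 'X-RequestDigest', value: data.FormDigestValue };
    // refresh one minute before the reported timeout elapses
    tokenExpires = Date.now() + (data.FormDigestTimeoutSeconds - 60) * 1000;
}

function isTokenExpired() {
    return !authToken || Date.now() >= tokenExpires;
}

// before sending a POST/PATCH call: if (isTokenExpired()) { token(); }
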

I hope this is able to help you resolve the authorization token requirements when creating and updating a SharePoint data list.

You can get a sample project here:  https://github.com/ozkary/sp-addin-todo


Originally published by ozkary.com

9/15/18

Managing SSIS Catalog Database Space

SQL Server Integration Services (SSIS) is a great tool for building ETL processes.  On SQL Server, we can configure an Integration Services catalog to deploy and configure all the SSIS packages. When using this service, there is also a catalog database named SSISDB that needs to be maintained, or we can quickly run out of disk space. In this article, we look at the catalog retention policy to better manage the disk space requirements of the SSISDB database.

SSISDB Database

SSISDB is the database that holds all the Integration Services catalog metadata as well as version and execution history. Depending on how frequently our packages run, the database can grow very quickly.

Fortunately for us, there is an SSIS Server Maintenance Job (a SQL Server Agent job) that runs every day to clean up and maintain the database.  The problem is that the job depends on catalog settings that enable the cleanup and define the retention period, which defaults to 365 days. Depending on our package activity, that retention window can let the database grow into the hundreds of gigabytes.
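To confirm that the maintenance job exists and is enabled on the instance, a quick check against msdb can be used. This query is only a sketch of that check:


-- check that the SSIS maintenance job is present and enabled
SELECT name, enabled
FROM msdb.dbo.sysjobs
WHERE name = 'SSIS Server Maintenance Job';
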

Catalog Configuration

The SSISDB has a catalog schema which contains the objects that can enable us to look at the catalog configuration. We need to look at the following objects to view and update the configuration:

catalog.catalog_properties (view):  A view that shows the catalog configuration settings.

catalog.configure_catalog (procedure):  A stored procedure used to update a configuration setting.

When selecting from the view, we can see the list of catalog property names and their current values.
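A simple query like the one below returns those properties; we run it against the SSISDB database:


-- list the catalog configuration properties and their values
SELECT property_name, property_value
FROM SSISDB.catalog.catalog_properties;
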


When we query the view, we need to look at these two settings:

OPERATION_CLEANUP_ENABLED:  This should be set to TRUE to enable the cleanup of historical data.

RETENTION_WINDOW:  The number of days of operational data to retain. If this data is not critical, set it to a low number like 30 days or less.

Change the Settings

To enable the cleanup and set a low retention window, we can use the configure_catalog stored procedure within the catalog schema. Let's take a look at how that can be done with T-SQL:


-- enable the cleanup of historical data
EXEC [SSISDB].[catalog].configure_catalog N'OPERATION_CLEANUP_ENABLED', N'TRUE';

-- set the retention window to 30 days
EXEC [SSISDB].[catalog].configure_catalog N'RETENTION_WINDOW', 30;

-- optional: run the cleanup on demand instead of waiting for the scheduled job
EXEC [SSISDB].[internal].[cleanup_server_retention_window];


After updating those settings, we can run the cleanup stored procedure on demand, or we can wait for the SQL Agent job to run at its scheduled time and clean up the data for us.

Database Size is Hundreds of Gigs

In the event that the database is already very large, changing the retention window to a very low number in one step (for example, from 365 days straight to 30 days) may cause the cleanup job to fail.  For these cases, we need to decrease the retention window in smaller steps.  For example, we can write a script which decrements the retention window by one day at a time and runs the cleanup procedure, as shown here:


USE SSISDB;
GO

-- reduce the retention window one day at a time until we reach the target of 30 days
DECLARE @index int = 364, @max int = 30;

WHILE @index > @max
BEGIN

    EXEC [catalog].configure_catalog N'RETENTION_WINDOW', @index;

    EXEC [SSISDB].[internal].[cleanup_server_retention_window];

    -- shrink the log file as well ('log' is the default logical name of the SSISDB log file)
    DBCC SHRINKFILE('log', 10);

    SET @index = @index - 1;

END



If the amount of data is very large, this script may take some time to run. Just let it run and monitor how the retention window decreases with every cycle.
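While the script runs, a second session can be used to watch the progress. The query below is only a sketch of that check; it reads the current retention window and the SSISDB file sizes (sys.database_files reports size in 8 KB pages):


-- current retention window
SELECT property_name, property_value
FROM SSISDB.catalog.catalog_properties
WHERE property_name = 'RETENTION_WINDOW';

-- SSISDB data and log file sizes in MB
SELECT name, size * 8 / 1024 AS size_mb
FROM SSISDB.sys.database_files;
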

Conclusion

SSISDB, like any other database, needs to be maintained. Depending on the activity of our SSIS packages, we need to be mindful of the maintenance plan for this database, and we need to review the catalog retention policy to make sure it is compliant with our disk space capacity.

Thanks for reading.


Originally published by ozkary.com

8/25/18

Upload a SharePoint Sandbox solution to Office 365

A SharePoint solution package (WSP) is a CAB file that enables us to deploy data lists and web content such as JavaScript, CSS, and ASPX assets to a SharePoint site. It is a sandboxed solution because it only lets us deploy assets that run in the browser. When deploying to Office 365, we need to upload the WSP file to a site where the solutions gallery is available. In this article, we take a look at creating a developer site, uploading a solution, and then enabling the feature that contains the assets we need to deploy.

Creating a Developer Site

To create a developer site, we need to visit the SharePoint admin center, which is available from the Admin > Admin Centers option in Office 365.  From there, we go to the site collection management area to create a new site collection. We can also access that area directly by visiting this link:


https://ozkary.sharepoint.com/_layouts/15/online/SiteCollections.aspx


Note: Replace ozkary with your tenant name.

On the site collections page, we press the New button and create a Developer Site collection. In our example, we name our site collection dev, which creates a URL similar to /sites/dev. We can use any meaningful name.



https://ozkary.sharepoint.com/sites/dev




Upload the Solution

Now that we have a developer site collection, we can upload the WSP solution. This is done by visiting the developer site collection we created in the previous section and selecting Site Settings > Web Designer Galleries > Solutions, or by visiting this link directly:


https://ozkary.sharepoint.com/sites/dev/_catalogs/solutions


We are now ready to upload and activate our solution by pressing the Upload button and selecting the solution file. Once it is uploaded, the Activate button becomes enabled; we just need to press it to activate the solution.

Enable the Feature

SharePoint solutions contain components, or features, that need to be enabled independently.  Once the solution is active, the features become available either at the site or the site collection level, depending on how the scope of the feature is set.

When the scope is set to Web, the feature is activated at the site level. When the scope is set to Site, the feature is available for the entire site collection (see the Feature.xml snippet below).  For our example, we enable the feature at the site level from Site Settings > Site Actions > Manage site features.  We could also visit this direct link:


https://ozkary.sharepoint.com/sites/dev/_layouts/15/ManageFeatures.aspx


We should be able to find the feature name and press the Activate button.
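For reference, the scope is defined in the feature's Feature.xml file inside the solution. The snippet below is only illustrative; the Id and Title values are placeholders, not the actual feature definition:


<Feature xmlns="http://schemas.microsoft.com/sharepoint/"
         Id="00000000-0000-0000-0000-000000000000"
         Title="Todo Assets"
         Scope="Web">
    <!-- element manifests that deploy the lists and web content go here -->
</Feature>
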

Checking the Content

Now that we have activated the feature, all the content in our solution feature should be deployed. To validate this, we can visit the Site Contents page, which should display the new data lists and other content that was deployed.

I hope this is able to provide an overview on how to upload a solution to Office 365.

Thanks for reading.

Originally published by ozkary.com

8/11/18

Parsing JSON with SQL Server

With SQL Server JSON support, we can now store JSON documents in a table, much like we have been able to store XML documents in an XML column. In this article, we take a look at parsing a complex JSON document with nested structures to return a flat structure similar to a database query result.

Define the JSON Document

We start by defining our JSON structure. The data may not make much sense, but the intent here is to be able to flatten this structure into a table format.  We want to return every player name with the team and associated score for that team; teams and scores are mapped by their index position in the arrays.  It may not be obvious yet, but there are some challenges here.


DECLARE @json nvarchar(1000) =
N'[
    {
        "player": {"name": "ozkary"},
        "scores": [7, 9],
        "teams": [
            {"name": "team one"}, {"name": "team two"}
        ]
    },
    {
        "player": {"name": "dani"},
        "scores": [6, 10],
        "teams": [
            {"name": "team a"}, {"name": "team b"}
        ]
    }
]'


For example, how can we get the name properties for both the player and teams fields? Looking at those fields, we can see that one is an object while the other is an array of objects.  In addition, the scores are stored in an array of values with no correlation to anything else.

We can try to parse this data to see how it looks once it is flattened.  We start by querying an object property, which can be done by providing the document path to that property. In this case, we can get the player name with this path:


'$.player.name'


The dollar sign ($) represents the root scope of the document. From there, we walk the properties down through the different levels.
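To quickly test a path expression against the @json variable declared above, we can use the JSON_VALUE function; the [0] index selects the first element of the array. This is just a small sketch:


-- returns ozkary, the player name of the first array element
SELECT JSON_VALUE(@json, '$[0].player.name') AS player_name;
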

When it comes to arrays, we need to do a CROSS APPLY with the array and return the data as JSON. This enables us to join with that document section and parse it, and we repeat that process until we get to the object of each array element. In the example below, we return the teams property as JSON, and we then cross apply on that field to select the name.


teams nvarchar(max) '$.teams' AS JSON,
scores nvarchar(max) '$.scores' AS JSON

CROSS APPLY OPENJSON(teams) WITH (
    team nvarchar(50) '$.name'
)


Now that we understand the approach, we can implement a solution that returns the information we need. The entire query should look as follows:


DECLARE @json nvarchar(2500) =
N'[
    {
        "player": {"name": "ozkary"},
        "scores": [7, 9],
        "teams": [
            {"name": "team one"}, {"name": "team two"}
        ]
    },
    {
        "player": {"name": "dani"},
        "scores": [6, 10],
        "teams": [
            {"name": "team a"}, {"name": "team b"}
        ]
    }
]'

SELECT
    player.name, team, score
FROM OPENJSON(@json) WITH (
    name nvarchar(50) '$.player.name',
    teams nvarchar(max) '$.teams' AS JSON,
    scores nvarchar(max) '$.scores' AS JSON
) AS player
CROSS APPLY OPENJSON(teams) WITH (
    team nvarchar(50) '$.name'
)
CROSS APPLY OPENJSON(scores) WITH (
    score nvarchar(50) '$'
)


We first open the entire JSON document with the OPENJSON function and select the fields we need using the WITH clause. For every field that we return as JSON, we do a CROSS APPLY and open that JSON segment, which enables us to select the object properties that we need.

The result is a flat set of rows: each player name is repeated for every team and score combination that belongs to that player.


As shown here, processing JSON documents with SQL Server is feasible, but we need to be mindful that JSON structures are defined to match an application model, and querying the data as a table structure can be a difficult task.

Thanks for reading.


Originally published by ozkary.com

7/28/18

ASP.NET MVC Apps on Virtual Directory with IIS Express

In previous articles, we learned how to deploy multiple ASP.NET MVC apps on the same Azure Web App by using virtual applications.  We also learned that, when more than one application defines the same routes, this may lead to ambiguous route requests if not configured properly.

Previous Articles





In this article, we learn how to configure our development environment with a virtual directory so that a second app runs in the same process, which simulates the environment on Azure.

IIS Express Configuration

Visual Studio solutions contain a .vs folder with solution configuration information. In that folder, we can find a config folder with an applicationhost.config file.  This is the file that enables us to configure IIS Express when running apps from Visual Studio.

When we open the file, we should look for the sites XML node. This is where the site and app definitions for the solution can be found. In the case of a solution with two ASP.NET projects, we can find settings similar to this:


<site name="ozkary.azure.vdir.main" id="2">
    <application path="/" applicationPool="ozkary.azure.vdir.main AppPool">
        <virtualDirectory path="/" physicalPath="d:\repos\ozkary.vdir\main" />
    </application>
    <bindings>
        <binding protocol="http" bindingInformation="*:61823:localhost" />
    </bindings>
</site>

<site name="ozkary.azure.vdir.admin" id="3">
    <application path="/" applicationPool="ozkary.azure.vdir.admin AppPool">
        <virtualDirectory path="/" physicalPath="d:\repos\ozkary.vdir\admin" />
    </application>
    <bindings>
        <binding protocol="http" bindingInformation="*:62029:localhost" />
    </bindings>
</site>



In these settings, there are two site nodes, main and admin.  Each site runs from a different local folder and on a different port. If we translated this to an Azure deployment, we would need two different web apps.

Our goal is to change these settings to use only one app and deploy the admin site as a virtual application under the main site.  To do this with IIS Express, we need to change the main site settings to read as follows:


<site name="ozkary.azure.vdir.main" id="2">
    <application path="/" applicationPool="Clr4IntegratedAppPool">
        <virtualDirectory path="/" physicalPath="d:\repos\ozkary.vdir\main" />
    </application>
    <application path="/admin" applicationPool="Clr4IntegratedAppPool">
        <virtualDirectory path="/" physicalPath="d:\repos\ozkary.vdir\admin" />
    </application>
    <bindings>
        <binding protocol="http" bindingInformation="*:61823:localhost" />
    </bindings>
</site>


To review, we just add another application node under the same site node. We need to be careful when setting the path information; otherwise, this can lead to errors. We set the new application node's path attribute to the virtual directory name (/admin). We then set the virtualDirectory node's path attribute to the root ("/") and its physicalPath to the folder of the second project, which is a different physical path.  This is essentially the same configuration we would apply on a full IIS server.

Validate the Configuration

To validate that our configuration is working properly, we can take a look at the processes that IIS Express creates for us. We first take a snapshot of the processes prior to making the virtual directory change. If we run the projects, we can see that the two projects are running under different process IDs (PIDs). This can be seen in the IIS Express admin application, which is available from the system tray.


We can then stop the applications and add the additional application node under the main site.  We are now ready to launch the applications again and take another snapshot.  We should now see that both applications are running under the same process ID (PID 25860 in our example).




After validating this locally, we can deploy to Azure and confirm that everything works with no conflicts. To learn how to deploy to Azure using a virtual directory, review the earlier article on that topic.


  


Hope this is helpful and thanks for reading.

Originally published by ozkary.com