Alpine Village Clinic

Introduction

Due to a reduction in deposits and a consequent decrease in funds available for loans, First Bank required Alpine Village Clinic to prepare an estimate of its semi-annual line of credit and borrowing needs. Alpine had taken no initiative to develop such a budget, so preparing a cash budget for the first time, and applying sensitivity analysis to establish the clinic's borrowing needs through mid-2010, proved very challenging.

The management of Alpine Clinic had ignored the critical practice of cash budgeting despite the growth and expansion of its credit needs. It was a hard task for the clinic to produce an informative cash budget in response to First Bank's request, given that management had not maintained proper records of its cash flow for a lengthy period (Alexander, 2012). In light of these challenges, it is essential for the clinic to maintain a daily cash budget.

This paper examines the problems experienced in Alpine Clinic's finance department, the alternatives available to the clinic, and the feasible solutions, and offers recommendations for future consideration.

Background Information

Alpine Village Clinic is located in a winter resort area in the suburbs of Aspen, Colorado. Though the clinic is open throughout the year, its business is largely seasonal because patient volume varies from month to month. The clinic's peak season falls in winter, from December to March, when most skiing injuries occur. At some point the doctors contemplated shutting the clinic during summer, but operating it for only part of the year proved inefficient; moreover, there was reasonable demand during summer to justify continuous operation through the year. The clinic was run by two doctors: Dr. James Peterson, an orthopedist, and Dr. Amanda Cook, who served as financial officer.

First Bank, the clinic's primary lender, had forecast a decrease in deposits. The bank therefore asked its credit customers to provide an estimate of their borrowing needs for the first half of 2010, and Dr. Cook requested Doug to prepare an estimate of the clinic's line of credit to present at the meeting (Gapenski, 2009).

A line of credit is a short-term loan agreement through which a bank agrees to lend a firm up to some specified maximum sum of money. Under such an agreement, qualified firms can borrow up to the credit limit. Upon expiry, a line can be renegotiated for as long as the business still finds the credit useful. The borrowed amount can be repaid at any time, but any outstanding balance must be cleared upon expiration. Interest accrues daily on the amount drawn down, and a commitment fee must be paid up front to secure the line.
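The cost mechanics just described (daily interest on the drawn amount plus an up-front commitment fee on the line) can be sketched as follows; all rates and amounts are illustrative assumptions, not case data:

```python
# Sketch of line-of-credit cost: interest accrues daily on the amount
# drawn, and a commitment fee is paid up front on the full line.

def line_of_credit_cost(drawn, days, annual_rate, line_limit, fee_rate):
    daily_rate = annual_rate / 365
    interest = drawn * daily_rate * days       # daily accrual on drawn sum
    commitment_fee = line_limit * fee_rate     # paid up front on the line
    return interest + commitment_fee

# e.g. $150,000 drawn for 90 days on a $200,000 line at 8%, with a 0.5% fee
cost = line_of_credit_cost(150_000, 90, 0.08, 200_000, 0.005)
```

A larger line raises the commitment fee even if it is never drawn, which is why the size of the requested line matters in the recommendations below.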

Problems

The underlying question anyone would ask after reviewing the state of cash budgeting at Alpine Clinic is whether management would still proceed to request a credit line within the period given by First Bank. If the clinic was indeed going to make such a request, how large a line of credit should it seek, given that no cash budget existed? In reality, the clinic was going to request a line of credit of about USD 200,000 for the specified period, since it faced a cash deficit of USD 171,752 from January through March.

Also, the outstanding loan at the time First Bank requested the cash budget totaled $500,000. If billings fell by ten percent, the existing line would still have covered expenses; however, if they fell by approximately 20%, neither the existing line of credit nor the recommended loan figure would have covered the costs.

Another problem with First Bank's request is that a monthly cash budget could not reveal the full extent of the borrowing needs to be met. To find out whether her concerns were valid, Dr. Cook proposed that Doug prepare a daily cash estimate for January as a test case. The daily billing forecast was to be estimated as the month's billings divided by the number of days in January. However, this would not have been possible because the cash budget model had no provision for either the interest paid on line-of-credit borrowing or the interest earned on cash surpluses. Faced with this challenge, Dr. Cook proposed a modification of the monthly cash budget to accommodate the missing elements.

The modification would rest on the observation that a maximum required loan of USD 48,250 appears on the monthly cash budget, whereas a maximum of USD 150,024 would appear on the daily budget from 15th January, since the bulk of cash expenditures fall on the 1st and 15th of the month. It is therefore evident that the monthly budget could not reveal the full extent of borrowing, because the loan balance appearing on the daily budget declines toward USD 48,250 only as the month progresses (Gapenski & Pink, 2014).
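The reason a daily budget reveals a higher peak loan than a monthly one can be sketched with a toy example; the amounts below are illustrative assumptions, not the case figures:

```python
# Sketch: collections arrive evenly through the month, but large
# payments fall on the 1st and 15th, so the intramonth cash trough is
# deeper than the month-end shortfall a monthly budget would show.

def peak_borrowing(opening_cash, daily_inflow, outflows_by_day, n_days):
    """Track the cash position day by day; the most negative balance
    is the peak borrowing requirement."""
    cash, peak_loan = opening_cash, 0.0
    for day in range(1, n_days + 1):
        cash += daily_inflow - outflows_by_day.get(day, 0.0)
        peak_loan = max(peak_loan, -cash)
    return peak_loan, cash

outflows = {1: 80_000.0, 15: 120_000.0}        # lumpy mid-month payments
peak, month_end = peak_borrowing(0.0, 5_000.0, outflows, 31)
```

Here the peak loan (reached just after the 15th) is far larger than the shortfall visible at month end, which is the pattern the daily budget exposes.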

The management was fully aware of the impact the modification of the financial estimate would have on the clinic's line of credit. A cash budget is a prediction of expected values rather than figures known with a high level of certainty. It was therefore difficult to rely on the forecast patients' billings and deposits, since a slight variation in the forecast amounts would produce misleading projected surpluses and deficits. Knowing how changes in the basic assumptions would influence the predicted surplus or deficit was of great interest given the state of the clinic's cash budget. Doug was therefore working on a financial estimate that would most likely leave the clinic's future borrowing capacity in an awkward position of poor predictability.

Additionally, it was apparent that allowing Doug to use the raw figures for patients' billings and deposits would mean that no bad-debt losses appeared in the cash budget required by First Bank. A bad-debt loss occurs when a rendered service is not paid for within a specified period, resulting in a delayed or missed collection. Alpine Village Clinic operated in such a way that the earliest payment made by clients came thirty days after the rendered service and the latest sixty days after. How would the clinic present a cash budget without bad-debt losses when such losses existed? Using the raw billings and collections to project the clinic's line of credit for the first half of 2010 therefore ignored bad-debt expense, and the outcome did not give a true reflection of the clinic's financial position, creating a loophole in its future borrowing capacity (Dorrell & Gadawski, 2012).

Another issue at Alpine Village Clinic was setting a target cash balance in the absence of a compensating balance. Although the target cash balance was grounded on First Bank's compensating balance requirement, the term loan was to be settled in September 2010; the problem was therefore how to set the clinic's target cash balance once the compensating balance was no longer required.

Another puzzle in estimating the cash budget is that patient volume was expected to rise over the forecast period. This would most likely produce surpluses, which raised the question of how they should be handled. At the same time, the financial projection showed that billings might fall below the estimated level. If billings came in at 90% of the projection, the deficit would increase from January through March, with a consequent reduction in the surplus from April through June. Similarly, if collections were stretched out to, say, 10%-20%-70%, the deficit would rise and the surplus would shrink.
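The sensitivity check described above can be sketched as a simple scenario comparison; the monthly figures below are illustrative assumptions, not the case numbers:

```python
# Sketch: how a drop in billings to 90% of forecast changes each
# month's surplus or deficit (collections minus expenses).

def net_position(collections, expenses):
    return [c - e for c, e in zip(collections, expenses)]

collections = [60_000.0, 70_000.0, 90_000.0]   # hypothetical Jan-Mar inflows
expenses    = [80_000.0, 80_000.0, 80_000.0]   # hypothetical fixed outflows

base  = net_position(collections, expenses)
lower = net_position([0.9 * c for c in collections], expenses)
# Each month worsens by 10% of its collections: deficits deepen,
# surpluses shrink.
```

This is the mechanism behind the 90%-billings scenario: the deficit months get deeper while the surplus months flatten.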

Alternatives and Reasons for Rejection

Given the problems Alpine Clinic faced in preparing the cash budget, some other options could have been considered. First, Dr. Cook could have considered relying on previous years' figures to predict the current situation, which would have offered firsthand information on how the clinic's credit line had been functioning.

However, using old records to serve the current scenario proved challenging and messy, in the sense that patient inflow is never constant, even during the peak period between December and March. A daily cash budget would have been the most helpful point of reference, but none existed, since management had not maintained one. Relying on old records could therefore have misled management in arriving at an appropriate cash budget for the required period.

Another alternative was to use past billings to mirror the collections to be made and use that information to prepare a cash budget. Under this approach, Doug, who was tasked with preparing the cash estimate, would follow the clinic's cash collection experience over the last period and convert the figures into cash collections for actual medical services rendered. This information would be critical for management to present to the bank as proof of the clinic's future borrowing capacity.

Nonetheless, this option was misleading, since the clinic's total billings do not map directly onto cash collections for a fixed period. The problem was that First Bank had made clear it needed a financial projection for the first half of 2010, yet the billings information available as of September 2009 would not fully reflect the clinic's cash collections at the beginning of the next year. In practice, about 20% of the patients served by the clinic pay immediately for the services rendered. The remainder of the billings is typically settled by third-party payers, who pay the next 20% within thirty days and the remaining 60% within sixty days of the service (Gapenski, 2009).
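The 20%-20%-60% collection pattern just described can be sketched as a simple lag mapping from billings to cash; the billing figures are illustrative assumptions:

```python
# Sketch: cash collected in a given month comes from that month's
# billings (20%) plus billings one month back (20%) and two months
# back (60%).

def collections_for_month(billings, month, pattern=(0.20, 0.20, 0.60)):
    """Cash collected in `month` from current and prior months' billings."""
    total = 0.0
    for lag, share in enumerate(pattern):
        if month - lag >= 0:
            total += share * billings[month - lag]
    return total

billings = [100_000.0, 120_000.0, 150_000.0]   # e.g. three successive months
third_month_cash = collections_for_month(billings, 2)
```

Because most cash arrives one to two months after service, billings from late 2009 dominate early-2010 collections, which is exactly why raw billings cannot stand in for collections.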

Possible Solution and Reasons for Choosing It

Dr. Cook's main worry was the possibility of surplus funds emerging from the cash budget approximation derived from the modified records, and management was concerned with the best way to utilize such surpluses should they arise. The forecast period usually attracts summer visitors, which makes those months' operations financially attractive and opens the door to surplus funds (Gapenski, 2012).

However, management viewed the possibility of a surplus as an opportunity rather than a problem: the surplus funds could be invested in cash-generating activities for the clinic. Because summer attracts many visitors, the clinic considered expanding its operations to include hospitality services, such as accommodation for skiing enthusiasts, alongside its medical services.

Management considered this option appropriate because it would lay a proper foundation for financial independence. With this venture, the clinic would benefit fully from the summer client surge and retain more funds to plan for its next peak season's financial needs, without depending entirely on the bank's lending, which had already shown a diminishing trend. The summer clients would also enjoy a wider array of services at the clinic: most clients with minor requirements, such as massage therapy, would have their accommodation needs met on the clinic's premises. Consequently, Alpine Clinic would rely more on internal resources and be subject to fewer external pressures.

Conclusion

Reflecting on the circumstances under which Alpine operated to meet its financier's needs, it is prudent that the practice of maintaining a daily cash budget be implemented. It is easier and more accurate to make financial projections based on correct information that reflects actual daily receipts and expenses. Making financial estimates from modified records can cause over- or under-budgeting, and in either case the errors may lead to mismanagement.

Recommendations

  • Based on the available information, if no commitment fee is required, a large credit line of about $400,000 would be most appropriate for the clinic.

  • However, if a fee is charged on the size of the credit line, then a smaller line would be the better option. About $200,000 is recommended, since it covers the scenario in which billings fall 20% below projection.

  • Other considerations include estimating cost reductions in the event that utilization falls. Management should also consider obtaining credit from alternative sources and assess its tolerance for risk.

References

Alexander, D. (2012). Principles of emergency planning and management. Edinburgh: Dunedin Academic Press; Portland, OR: International Specialized Book Services.

Dorrell, D., & Gadawski, G. (2012). Financial forensics body of knowledge. Hoboken, NJ: Wiley.

Gapenski, L., & Pink, G. (2014). Cases in healthcare finance. Chicago, IL: Health Administration Press; Arlington, VA: AUPHA Press.

Gapenski, L. (2012). Healthcare finance: An introduction to accounting and financial management. Chicago, IL: Health Administration Press.

Gapenski, L. (2009). Fundamentals of healthcare finance. Chicago, IL: Health Administration Press; Arlington, VA: Association of University Programs in Health Administration.

Simulations

In this study, simulations will be done using the MATLAB Simulink simulator. For the simulation process to be successful, all simulation properties must be established. The simulations performed in this paper will have one main parameter; the reason is to ensure that the results obtained are easy to understand and to draw conclusions from. The goal of this study is to compare IC techniques in terms of interference suppression in NOMA cellular networks. The main parameter will be the time interval during which a cell station instructs user terminals to stop transmitting, so that the only signals received are the interfering signal and noise.

There are three main IC techniques that will be considered in the simulation: the genetic algorithm technique, the simulated annealing technique, and a technique in which frequency-agile band-pass filters are placed after transmitters and in front of receivers to help suppress interference. Separate simulations will be run for the three techniques in a common simulation environment. The reason for this is to ensure that all techniques are assessed under the same conditions, so that the results obtained are free from bias.

The goal of the simulations will be to assess whether the adaptive filters used by the different techniques are able to update their coefficients in order to suppress interference. One of the key variables in the simulations will be the error between the desired signal and the filter output.

Usually, the goal of every technique is to drive the error defined above to zero. However, there are inefficiencies in every technique that prevent the error from reaching zero, and these inefficiencies are what cause the performance differences between the techniques; the nature of each technique's adaptive algorithm is what produces them. The magnitude of the error will be established, and the main simulation will plot relative power against the number of filter taps. Another key variable in the simulation will be the interference-to-noise ratio (INR).
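The two expressions referred to above are not reproduced in the text. In standard adaptive-filter notation, an assumed reconstruction would be, with $d(n)$ the desired signal, $y(n)$ the adaptive filter output, and $P_I$, $P_N$ the received interference and noise powers:

```latex
e(n) \;=\; d(n) - y(n)
\qquad\qquad
\mathrm{INR} \;=\; 10\,\log_{10}\!\left(\frac{P_I}{P_N}\right)\ \text{dB}
```

Driving $e(n)$ toward zero is equivalent to the filter output cancelling the interference component of the desired signal.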

Genetic algorithm technique

Simulated annealing technique

Frequency agile band pass filters placed after transmitters and in front of receivers

Error Recovery Protocols in Satellite Transmission

Introduction

Communication remains a critical aspect of modern society. New technologies emerge every day that are geared toward making it cheaper and more efficient, and end users are presented with a variety of mechanisms for the different forms of communication already in existence. In the midst of all this technology is the question of communication accuracy, which is defined by the level of distortion every communication channel can introduce into the original message (Hong et al., 2014).

Accuracy is considered to affect not only the reliability of the communication channel but also the communication cost. Different channels have different methods of ensuring the highest level of accuracy, and many algorithms have been developed to facilitate accurate delivery of the sender's message. The accuracy of the message is, however, affected by a number of channel-related factors: the channel introduces a considerable amount of noise, which has the potential to distort the original message. Most of the existing algorithms evaluate the delivered message against the original, and any deviation from the expected standard automatically prompts retransmission. The mode of retransmission, and the decision to retransmit, are normally made on the basis of the packets delivered and their sequence (Hong et al., 2014).

Retransmission over a satellite link follows more or less the same scenario. In this case the satellite is part of the communication channel, so there is a significant probability of noise corrupting the data signal. Point-to-multipoint communication over the satellite is perceived to have relatively high chances of noise distortion because of the complexity of the information flow. The directional flow of information is critical in satellite communication, since it defines the accuracy and reliability of the transmission channel (Tsuda and Ha, 2009).

Satellites are considered to be among the most successful transmission channels for digital broadcast, and it is further perceived that their transmission potential has not been fully explored. This paper provides a discussion of the retransmission aspect observed in point-to-multipoint satellite communication systems. The paper will explore the transmission errors generated in this mode of communication and will also discuss the existing error correction mechanisms, paying special attention to the go-back-n error recovery strategy. The strategy will be simulated in MATLAB in order to evaluate its efficiency and identify opportunities for improvement (Towsley, 2009).

Aim

This paper intends to explore the error correction strategies existing in satellite communication. Satellite communication remains a critical channel for relaying digital broadcast information across the globe. The paper provides a discussion of the retransmission aspect observed in point-to-multipoint satellite communication systems and additionally explores the transmission errors generated in this mode of communication. A discussion of the existing error correction mechanisms also forms part of the objectives. Finally, a simulation of the go-back-n error recovery strategy will be undertaken in MATLAB in order to evaluate its efficiency and discover opportunities for improvement.

Significant studies

Many studies of satellite communication systems have been undertaken in the recent past. These studies have produced the different communication systems currently implemented, and significant effort has also been directed toward improving them. The studies have made satellite digital broadcasting possible and have also introduced the asymmetric nature of the IP traffic associated with satellite communication systems. At the moment, the satellite is considered the most reliable provider of IP multicast communication (Singh and Chandran, 2012).

Perhaps the study of the retransmission aspect of satellite communication has gained the most popularity in recent work. Retransmission has been regarded as the only solution to the transmission errors observed in satellite communication (Singh and Chandran, 2012). The following figure illustrates point-to-multipoint communication over the satellite:

Figure 1 Point-to-multipoint transmission over the satellite

The above communication scheme can be designed as either a single-direction or bi-directional system in which information is relayed from a central antenna to an array of antennas using time-division multiplexing. In this scheme the same message is delivered to an array of receiving antennas at very high speed, which makes it preferable for establishing real-time communication in teleconferencing systems. This is a unique mode of communication, since the central connection endpoint is expected to establish a specific link for transmission to multiple remote peripherals: any data originating from the central antenna is received by all the peripheral remote antennas in their dedicated sessions. It then becomes clear that retransmission of the signal is complex, since the central antenna has to establish which message was transmitted at a given time to which remote peripheral. Several algorithms have been developed to establish the retransmission mechanism whenever there is such a need (Singh and Chandran, 2012).

There exist three modes of communication over the satellite under IP multicast: unicast, multicast, and broadcast. Unicast communication is established between a single antenna and a specific receiver; the receiving antenna acknowledges receipt of the message when the transmission is complete. Multicast transmission is established when the central connection endpoint sets up a specific link for transmission to multiple remote peripherals: once the connection is established, any data originating from the central antenna is received by all the peripheral remote antennas in their dedicated sessions. The broadcast mode does not have a specific receiving antenna; a single transmitting antenna sends data to unknown receivers. Receiving antennas are only required to request access to the transmitted data using specific criteria, and once access is granted, a receiving antenna continues to receive data for as long as the transmitter continues to transmit (Singh and Chandran, 2012).

Scope of the studies

Most of the studies are considered to focus on the different communication options available with satellite communication; little has been explored in terms of transmission challenges and the efficiency of the whole communication system. The errors introduced during a transmission session, in particular, have received little attention. It can, however, be noted that satellite communication, just like any other form of communication, suffers the drawbacks of transmission errors. The channel carries large transmission packets, making it difficult to rectify the flow of information at the lowest level. As such, there remains a challenge of determining the correctness and reliability of the transmission channel (Singh and Chandran, 2012).

It can also be noted that point-to-multipoint transmission has the ability to self-correct when the data received at the receiving antennas is compared: since the receiving antennas are expected to receive identical information, establishing what was received at any given moment by every single antenna can determine and correct transmission errors. This mode of error correction is, however, unreliable, since the receiving destinations are normally far apart, and it requires a dedicated connection and an evaluation algorithm that monitors the content of each packet received at every endpoint. It is therefore critical to explore other mechanisms that can detect and correct the errors generated in point-to-multipoint communication over the satellite. This is not only a gap in the satellite communication literature but also a challenge to the telecommunication industry (Singh and Chandran, 2012).

There is a need to pay attention to error recovery mechanisms that can improve the reliability of satellite communication systems. Several error recovery mechanisms exist that have not been fully explored. The go-back-n recovery mechanism has gained popularity as a result of the simple way in which it recovers lost transmission packets; an additional review of this mechanism is provided in the literature review section (Selig, n.d.).

Literature review

Satellite communication employs three main error detection and correction mechanisms. The choice among them depends on the number of nodes involved in the transmission, the nature and form of the data being transmitted, and the mode of transmission used. Automatic repeat request (ARQ) is among the most common mechanisms employed in satellite communication. This approach requests retransmission of a signal considered to have been lost or distorted by the channel. ARQ makes use of an error-detection code to acknowledge reception of the transmitted data: the receiving antenna sends an acknowledgment signal indicating delivery of the transmitted data, and when the transmitter fails to receive this acknowledgment within a set time, it automatically retransmits the signal. There are three modes of automatic repeat request: stop-and-wait, go-back-N, and selective repeat (Selig, n.d.).

Stop-and-wait Automatic repeat request

This is considered the simplest error correction technique and is adequate for simple communication protocols. Stop-and-wait automatic repeat request consists of transmitting a protocol data unit (PDU) of information and then waiting for a response from the receiving antenna. The receiver is expected to provide an acknowledgement PDU when it receives all the expected PDUs from the transmitting unit, and it can provide a negative acknowledgement if it receives an incorrect PDU. The technique is designed so that the transmitter automatically retransmits the signal when there is a delay in receiving a correct acknowledgment from the receiving unit (Selig, n.d.). The following figure illustrates the stop-and-wait error correction mechanism:

Figure 2 Stop-and-wait error correction technique

It is important to note that the sender always waits for the acknowledgement signal before transmitting another PDU, and it is idle whenever it is waiting for that signal. In the figure above, the blue arrow indicates the sequence of data PDUs transmitted across the communication channel, and the green arrow indicates the acknowledgment signals sent by the receiver to the transmitter. Stop-and-wait automatic repeat request can be implemented in either full- or half-duplex form (Selig, n.d.). There is normally a small delay between the reception of the last byte of a PDU and the generation of the corresponding ACK. Perhaps the most important aspect of stop-and-wait automatic repeat request is its reliance on time for the transmitter to detect the loss of a data PDU: the transmitter waits for the acknowledgement signal within a specified time, and if it does not arrive within that time, the transmitter resends the data PDU. It is important to note that the receiver does not have the capability of detecting data loss (Pujolle and Puigjaner, 2011). The transmitter expects an acknowledgement signal carrying a code unique to the sent signal; until it arrives, the transmitter forwards the same data PDU over the channel and waits again for the acknowledgement. The following figure illustrates the retransmission mode of the stop-and-wait automatic repeat request:

Figure 3 Retransmission mechanism of the Stop-and-wait Automatic repeat request
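The stop-and-wait retransmission cycle just described can be sketched as a minimal simulation; the frame names and loss pattern below are illustrative assumptions:

```python
# Sketch of stop-and-wait ARQ: send one frame, wait for the ACK, and
# on a timeout (modelled as a lost transmission attempt) resend it.

def stop_and_wait(frames, lost_on_attempt):
    """Send frames one at a time; `lost_on_attempt` holds (frame, attempt)
    pairs whose transmission is lost, forcing a timeout and a resend."""
    delivered, transmissions = [], 0
    for frame in frames:
        attempt = 1
        while True:
            transmissions += 1
            if (frame, attempt) not in lost_on_attempt:
                delivered.append(frame)      # ACK received; move on
                break
            attempt += 1                     # timeout: retransmit same PDU

    return delivered, transmissions

# Frame 'B' is lost once, so it is sent twice: 4 transmissions for 3 frames.
delivered, sent = stop_and_wait(['A', 'B', 'C'], {('B', 1)})
```

The sender's idleness between a frame and its ACK is what the efficiency analysis below quantifies.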

In Figure 3, the second data PDU is corrupted. The receiver therefore discards it and either provides a negative acknowledgement to the sender or fails to provide an acknowledgement at all; the absence of the acknowledgement signal prompts the transmitter to resend the data packet over the channel. The efficiency of the stop-and-wait technique can be computed as follows (Lam, 2009). Let S be the time between the transmission of a packet and the reception of its acknowledgment, and let DTP be the transmission time of the packet; efficiency is then the ratio of useful transmission time to total cycle time. Where TO is the timeout interval and X is the time taken for the transmission of a data PDU and the reception of its acknowledgment, the cycle time must also account for retransmissions triggered by timeouts.
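The efficiency expressions themselves do not survive in the text. In the notation just defined, and introducing $p$ (an added symbol) for the probability that a given transmission attempt fails, a standard reconstruction would read:

```latex
\eta \;=\; \frac{D_{TP}}{S}, \qquad S = X \ \ \text{(error-free case)}

E[S] \;=\; X + \frac{p}{1-p}\,T_O
\qquad\Longrightarrow\qquad
\eta \;=\; \frac{D_{TP}}{\,X + \dfrac{p}{1-p}\,T_O\,}
```

The factor $p/(1-p)$ is the expected number of timeouts incurred before a cycle succeeds, each of which adds $T_O$ to the cycle time.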
Go-Back-N automatic repeat request

Go-Back-N is an error recovery procedure preferred for point-to-multipoint transmission in many communication protocols, and it is also applied in satellite communication to ensure delivery of data packets over the transmission channel. The procedure is able to detect and retransmit I-frames that have already been processed by the underlying algorithm. The technique relies on the sequence of the frames sent from the sender to the receiver: the receiver continuously acknowledges the reception of a given number of data PDUs and additionally keeps a record of their sequence (Idawaty Ahmad and Mohamed bin Othman, n.d.).

All acknowledgements of received data PDUs are sent to the transmitter. Upon loss of a data PDU, however, the receiver fails to acknowledge its reception and discards the data PDUs with higher sequence numbers that keep arriving from the transmitter. The transmitter automatically detects the missing acknowledgement of the lost PDU. At this point the receiver requests the transmitter to stop transmission and go back to the last correctly sent and received data PDU, and the transmitter is then required to retransmit all the data PDUs starting from the sequence number of the lost PDU (Idawaty Ahmad and Mohamed bin Othman, n.d.). The following figure illustrates the Go-Back-N automatic repeat request error correction technique:

Figure 4 Go-Back-N error recovery technique

It is important to note that there are three stages through which a corrupted PDU is recovered. First, the corrupted PDU is discarded at the receiver's end; the receiver retains only its sequence number. The sequence number of the corrupted PDU is then sent back to the transmitting node, and the transmitter goes back to that sequence number and retrieves the correct data PDU. Meanwhile, the receiver continuously discards all PDUs sent from the transmitter that do not carry the requested sequence number. When the transmitter sends the correct PDU with that sequence number, it is accepted by the receiver, which then resumes accepting PDUs with higher sequence numbers. It is thus clear that transmission of the PDUs momentarily stops until the correct PDU is sent, after which the sequence proceeds. This technique is associated with delays emanating from the time lost when a corrupt PDU is received at the receiving node (Hong et al., 2014).

Detection of a corrupt data PDU introduces a transmission delay, manifested as the time taken for the receiver to request retransmission of the corrupted PDU. The transmitter is then forced to restart sending from the last successfully received PDU, and a buffer at the transmitter keeps the sequence of the remaining PDUs that have not been transmitted after the interruption (Hong et al., 2014). The following figure illustrates the retransmission mechanism provided by the Go-Back-N automatic repeat request:

Figure 5. Retransmission mechanism of the Go-Back-N automatic repeat request

The efficiency of this scheme can be computed as follows. Let a be the ratio of the propagation delay to the frame transmission time, and let P be the probability that a frame is received in error.

The value of N is chosen large enough to allow continuous transmission while waiting for the acknowledgment of the first packet of the window. As such,

N >= 2a + 1

Assuming there are no transmission errors, the efficiency of the system is then

U = 1 for N >= 2a + 1 (and U = N / (2a + 1) otherwise)

However, when an error occurs the entire window of up to N packets has to be retransmitted, so

U = 1 / E[X] = (1 - P) / (1 - P + NP)

where X is the number of packets sent per successful transmission, with E[X] = (1 - P + NP) / (1 - P).
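These utilization expressions can be evaluated numerically. The sketch below assumes the standard textbook Go-Back-N formulas (frame error probability P, window size N, and a the propagation-delay-to-frame-time ratio); the function name is illustrative, not from the paper:

```python
def gbn_utilization(P, N, a):
    """Go-Back-N link utilization under the standard textbook model.
    P: frame error probability, N: window size,
    a: propagation delay / frame transmission time."""
    if N >= 2 * a + 1:
        # pipeline kept full: each error costs roughly N extra frames
        return (1 - P) / (1 - P + N * P)
    # window too small to fill the pipe: scale by N / (2a + 1)
    return N * (1 - P) / ((2 * a + 1) * (1 - P + N * P))

# An error-free channel with a large enough window gives full utilization
assert gbn_utilization(0.0, 7, 3) == 1.0
```

As P grows the N*P term dominates the denominator, which is the quantitative form of the disadvantage listed below: a single error forces a whole window's worth of retransmission.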

Advantages of Go-Back-N automatic repeat request

  1. Its efficiency is higher than that of the stop-and-wait protocol

  2. A single acknowledgment signal can cover more than one data PDU

  3. It is possible to set a timer on a group of data PDUs

  4. The sender can transmit many data PDUs at the same time

Disadvantages of Go-Back-N automatic repeat request

  1. The Go-Back-N automatic repeat request has buffer requirements that may increase the cost of implementing the protocol

  2. The transmitter requires additional memory to store the last N packets

  3. It introduces unnecessary retransmission of data PDUs that may be error-free

  4. It is highly inefficient when the delay is large, since a large amount of data must be retransmitted

Selective repeat ARQ

Selective repeat ARQ is another error correction mechanism employed in point-to-multipoint satellite communication. The technique is considered complex, since it involves a set of procedures that provide error recovery. However, it is regarded as the most efficient mechanism for correcting transmission errors because it retransmits only the erroneous data PDUs rather than the entire window. The receiver is designed with the ability to correct transmission errors noted on the received data frames (Hercog, n.d.).

Both the sender and the receiver maintain a window of acceptable sequence numbers that helps them determine when transmission errors have occurred. The sender's window normally starts at zero and grows to some pre-defined maximum, while the receiver's window has a constant size equal to a predetermined maximum. In addition, the receiver reserves a buffer for each sequence number in its fixed window. The following figure illustrates the mechanism of selective repeat ARQ:

Figure 6. Selective repeat ARQ

Attached to each buffer at the receiver is a bit that indicates whether the buffer is empty. The sequence number of an arriving frame is checked to see whether it falls within the window; if it does, the frame is accepted and stored. The frame is kept in the link layer and not passed to the network layer until all lower-numbered frames have been delivered, so the sequence of the frames is preserved. It is also important to note that the sender retransmits only those frames for which a NAK has been received, which makes the efficiency of this scheme very high and results in fewer retransmissions than the Go-Back-N technique. The main disadvantage of the scheme is the complexity it imposes on the sender and receiver, since each frame must be acknowledged individually and the receiver is tasked with maintaining the frame sequence (Hercog, n.d.). The probability of delay variance can be computed as illustrated below.

Let there be N independent transmission processes defined by the matrix P, all starting at the same time. All processes are expected to complete at or before ZN. The probability that the jth process completes last can be defined as below;

It is thus clear that the probability of delay is related to the level of redundancy that should be added to the packet stream in order to meet the user's delay constraints. This technique is therefore considered the most efficient error recovery technique of the three methods (Galinina, Balandin and Koucheryavy, n.d.).

The strength of selective repeat ARQ lies in a number of aspects that are not possible in other ARQ protocols. It is considered the most efficient because, unlike the other ARQ protocols, it does not retransmit data PDUs that were received correctly. Although it is complex, the underlying algorithms provide a high assurance of accuracy and efficiency, and the receiver is designed with the ability to correct transmission errors noted on the received data frames. The probability of error occurrence and the time delay of the recovery mechanism have been illustrated for the three methods presented above (Galinina, Balandin and Koucheryavy, n.d.).

It can further be pointed out that this error correction mechanism provides room for a larger receiving window than the other two ARQs. The buffers at the receiving end can therefore comfortably store out-of-order packets, avoiding unnecessary retransmission (Galinina, Balandin and Koucheryavy, n.d.).

Another advantage of selective repeat ARQ concerns buffered packets that arrive out of order. The receiver can rearrange them and then send the acknowledgement signal to the sender, completing the transmission process. This is not the case with the stop-and-wait and Go-Back-N protocols. It is likewise possible for the receiver to re-acknowledge already-received out-of-order packets, a scenario that is not possible with the other two ARQ protocols (Galinina, Balandin and Koucheryavy, n.d.).
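The buffering and in-order delivery behaviour described above can be sketched briefly. This Python fragment is illustrative only (the paper's simulation is in MATLAB), and the class name `SRReceiver` is an assumption, not from the source:

```python
class SRReceiver:
    """Selective-repeat receiver sketch: buffers out-of-order frames
    and releases them to the network layer strictly in sequence."""
    def __init__(self):
        self.expected = 0     # lowest sequence number not yet delivered
        self.buffer = {}      # out-of-order frames, keyed by sequence number
        self.delivered = []   # what has reached the network layer

    def receive(self, seq, payload):
        self.buffer[seq] = payload         # accept any frame in the window
        # deliver the in-order prefix to the network layer
        while self.expected in self.buffer:
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
        return seq                          # each frame is ACKed individually

rx = SRReceiver()
for seq, data in [(0, "a"), (2, "c"), (1, "b")]:   # frame 1 arrives late
    rx.receive(seq, data)
assert rx.delivered == ["a", "b", "c"]
```

Note that frame 2 is held in the buffer rather than discarded, which is exactly what distinguishes selective repeat from Go-Back-N.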

This technique is therefore considered the most efficient error recovery technique of the three methods (Bisio, 2016). The probability of error occurrence and the time delay of the recovery mechanism have been illustrated for the three methods presented above, and a comparison of the efficiency of the three methodologies has also been provided. The following figure illustrates the comparison between Go-Back-N and the selective repeat automatic repeat request:

Figure 7. Comparison between selective repeat and Go-Back-N protocols

Forward Error Correction

Forward error correction is yet another technique used in satellite transmission. It involves the transmission of redundant data: the sender purposely adds redundant bits to the data PDUs sent over the transmission channel, and the receiver uses this redundancy to detect and correct corrupted PDUs without requesting retransmission. The technique is usually preferred for broadcast communication, where many receivers expect data from a single transmitter.

The forward error correction technique works well only if the errors occurring on the transmission channel are independent, so that the correction of one error does not modify the errors in subsequent data PDUs. When there are many errors in the system, the technique becomes less efficient.

The forward error correction technique requires an intelligent transmitter that can introduce unique redundant bits using a predetermined algorithm. The longer sequence of bits added to the data PDU is then sent over the transmission channel, and the receiver uses a suitable decoder to retrieve the original signal. The following figure illustrates the concept of the forward error correction technique:

Figure 8. Forward error correction technique
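The encode/decode cycle described above can be made concrete with a single-error-correcting Hamming(7,4) code, a classic FEC scheme. The paper does not name a specific code, so this Python sketch is illustrative only:

```python
def hamming74_encode(d):
    """Encode 4 data bits as a (7,4) Hamming codeword [p1,p2,d1,p3,d2,d3,d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over codeword positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over codeword positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3    # 1-based index of the errored bit
    if syndrome:
        c[syndrome - 1] ^= 1           # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                           # channel corrupts one bit
assert hamming74_decode(word) == [1, 0, 1, 1]
```

The decoder recovers the data without any retransmission, at the cost of three extra bits per four data bits, which is the bandwidth trade-off listed under the disadvantages below.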

Advantages

  1. It does not involve retransmission of signals whenever an error is detected in the data PDUs
  2. The redundant bits inserted into the original data allow corrupted bits to be detected and corrected

Disadvantages

  1. Requires a special type of receiver and transmitter that implement the algorithm
  2. Consumes transmission channel bandwidth on redundant data that could otherwise be used to send additional data

Limitation

  1. Requires special form of algorithm to introduce redundant bits
  2. Requires extra bandwidth for transmission of redundant data

Methodology

This section discusses the different approaches used for error recovery in point-to-multipoint satellite communication. Stop-and-wait, Go-Back-N and selective repeat ARQ are each discussed below.

Stop-and-Wait Automatic Repeat Request

Stop-and-wait automatic repeat request consists of transmitting a protocol data unit (PDU) of information and then waiting for a response from the receiving antenna. The receiver is expected to return an acknowledgement PDU when it receives the expected PDU from the transmitting unit, and a negative acknowledgement if it receives an incorrect PDU. The technique gives the transmitter the ability to retransmit the signal automatically when a correct acknowledgement from the receiving unit is delayed (Selig, n.d.).

Advantages

  1. It has incorporated a timer in the transmission process
  2. Very relevant for noisy channels
  3. It constitutes both flow and error recovery and control mechanisms

Disadvantages

  1. The efficiency of this technique is low
  2. No timer set for individual frames
  3. Only a single frame can be set at a time

Limitations

  1. This technique has only one window size
  2. There is no pipelining in stop and wait ARQ
  3. The receiver window is limited to one

Go-Back-N ARQ Protocol

This technique constitutes the retransmission of data PDUs once an error has been detected. As described in the previous section, it relies on the sequence numbers of the frames sent from the sender to the receiver: the receiver acknowledges received data PDUs and keeps a record of their sequence, and when a PDU is lost its acknowledgement never reaches the transmitter. The transmitter detects the missing acknowledgement, goes back to the last correctly received data PDU, and retransmits all data PDUs from the sequence number of the lost PDU onwards (Ahmad and Othman, n.d.).

Advantages of Go-Back-N automatic repeat request

  1. Its efficiency is higher than that of the stop-and-wait protocol

  2. A single acknowledgment signal can cover more than one data PDU

  3. It is possible to set a timer on a group of data PDUs

  4. The sender can transmit many data PDUs at the same time

Disadvantages of Go-Back-N automatic repeat request

  1. It introduces unnecessary retransmission of data PDUs that may be error-free

  2. It is highly inefficient when the delay is large, since a large amount of data must be retransmitted

Limitations

  1. The Go-Back-N automatic repeat request has large buffer requirements

  2. Additional memory is required to store the last N packets

Selective Repeat ARQ

This is the approach recommended for implementing error recovery in point-to-multipoint satellite communication. In reference to the discussion in the previous sections, it is clear that this protocol has the highest efficiency with little chance of retransmission, and the resources required to implement it are comparatively small. Its strength lies in a number of aspects that are not possible in the other ARQ protocols: unlike them, it does not retransmit data PDUs that were received correctly, and although it is complex, the underlying algorithms provide a high assurance of accuracy and efficiency. The receiver under selective repeat is designed with the ability to correct transmission errors noted on the received data frames. The probability of error occurrence and the time delay of the recovery mechanism have been illustrated in the three methods presented above (Bisio, 2016).

There is not much difference between GBN and selective repeat from an implementation standpoint; indeed, the efficiency of the two protocols is considered the same up to the moment an error is detected. Implementation should start with the establishment of the sender and receiver windows, for which sufficient memory capacity should be set aside. In selective repeat, the window size must be no more than half the sequence-number space, to avoid packets being recognized incorrectly. Sending of new packets can then be initiated as long as an acknowledgement algorithm is in place. A simulation of this process has been done in MATLAB, as shown in the next section (ARQ Protocols in Cognitive Decode-and-Forward Relay Networks: Opportunities Gain, 2015).
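The window-size constraint mentioned above (no larger than half the sequence-number space) can be demonstrated with a short sketch. The names `MOD` and `in_window` are illustrative assumptions, not from the paper:

```python
MOD = 4   # size of the sequence-number space (numbers are taken mod 4)

def in_window(start, size, seq):
    """True if seq falls inside the receive window [start, start+size) mod MOD."""
    return (seq - start) % MOD < size

# With window size 3 (> MOD/2): the receiver delivered frames 0,1,2 and its
# window is now {3,0,1}. If all three ACKs were lost, the sender retransmits
# frame 0 - and the receiver wrongly accepts the old frame as a NEW frame 0:
assert in_window(3, 3, 0) is True      # ambiguity: duplicate looks new

# With window size 2 (= MOD/2): after delivering 0,1 the window is {2,3},
# so a retransmitted frame 0 is correctly rejected as a duplicate:
assert in_window(2, 2, 0) is False
```

This is why the sequence-number space must be at least twice the window: old retransmissions and new frames must never share a window position.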

Advantages

  1. There is no unnecessary retransmission of correctly received frames
  2. Provides the chance of recovering corrupt frames
  3. Higher efficiency compared to the Go-Back-N protocol

Disadvantages

  1. Recovery can be time-consuming when an error is detected

Limitations

  1. Requires relatively large CPU processing resources

Experimental setup simulation and results

The selective repeat protocol was implemented in MATLAB using distinct models and code developed to imitate the behaviour of the real system. The different models were developed first, before being joined together to start the transmission process. A discussion of the models is provided below.

Transmitter model

This model was developed with two sub-modules: packet generation and encoding. The packet generator generates the transmitted packets at a fixed rate, with new data added to the transmitter's buffer after a fixed period of time (Beard and Stallings, 2016). Every packet is stored in the form [seqnum, payload], where seqnum is the sequence number of the packet and payload is the size of the packet in bits. The following pseudocode illustrates the transmitter model used in the simulation:

Figure 9. Pseudocode for the transmitter model

The generated packets are then forwarded to the Vandermonde matrix encoder, which strips the packets of their headers and trailers and converts them into data blocks ready for transmission over the channel. Note that the Vandermonde model makes it possible to produce linearly independent packets that can be recovered easily at the receiver (Beard and Stallings, 2016).
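The packet-generation sub-module described above (a fixed-rate source emitting packets in the [seqnum, payload] form) can be sketched as follows. This Python fragment is illustrative only; `packet_generator` is not a name from the paper, and the payload is represented by its size in bits, as in the text:

```python
from itertools import count

def packet_generator(payload_bits=1000):
    """Fixed-rate packet source for the transmitter model: yields packets
    in [seqnum, payload] form with monotonically increasing seqnum.
    (payload is represented by its size in bits, per the simulation table.)"""
    for seqnum in count():
        yield [seqnum, payload_bits]

gen = packet_generator()
assert next(gen) == [0, 1000]
assert next(gen) == [1, 1000]
```

In the real model each `next()` would be driven by the fixed-rate timer that refills the transmitter's buffer.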

The channel model

The channel model was designed so that corrupt frames can be simulated accurately. The channel was modelled as a binary symmetric channel with independent and identically distributed errors. The channel model keeps checking for available data; once data PDUs are placed on the channel, it confirms whether the data has already been acknowledged by the receiver (Beard and Stallings, 2016). If already acknowledged, it is passed back to the transmitter; otherwise it is forwarded to the error sub-module and finally to the receiver. The following figure shows the pseudocode developed for the channel model:

Figure 10. Pseudocode for the channel model
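The i.i.d. binary symmetric channel at the heart of this model can be sketched in a few lines. Python is used here for illustration (the paper's model is in MATLAB, whose Communications Toolbox provides an analogous `bsc` routine); the seeded generator is an assumption added for reproducibility:

```python
import random

def bsc(bits, p, rng=random.Random(1)):
    """Binary symmetric channel: flips each bit independently with
    probability p (i.i.d. errors, as the channel model assumes)."""
    return [b ^ (rng.random() < p) for b in bits]

frame = [0, 1, 1, 0, 1, 0, 0, 1]
assert bsc(frame, 0.0) == frame            # error-free channel is transparent
assert len(bsc(frame, 0.5)) == len(frame)  # errors flip bits, never drop them
```

Because each bit is flipped independently, the error process matches the independence assumption stated for the forward error correction discussion earlier.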

Receiver model

The receiver model is also designed with an error checker. Received packets are first forwarded to the error checker, where they are checked for errors. If errors are found, the receiver proceeds to check all the packets in the Vandermonde matrix, generates a negative acknowledgement signal, and sends it to the transmitter. In addition, the corrupt packets are stored in the buffer in the hope that they will help correct future errors; corrupt packets are discarded after a specified length of time and new packets are requested (Beard and Stallings, 2016). The following table lists the parameters used in the simulation:

Parameter        Value
Pkt-size         1000 bits
Lambda           1000 packets
Rate             10 kbps
Tprop            20 ms
Tsim             3000 ms (simulation time)
n                7 blocks per code
FER              Frame error rate (0, 0.2, 0.7, 0.8)
Ack-wait-time    0.001 ms

MATLAB Codes

The following code was used to simulate the selective repeat ARQ in MATLAB:

% Program for ARQ protocol analysis
clc; clear all; close all;

n = input('Enter the data sequence length: ');
m = input('Enter the number of packets: ');
x = randi([0 1], m, n);          % random data bits (m packets of n bits each)

% Make packets: append an even-parity bit to each data sequence
p = zeros(1, m);
for i = 1:m
    for j = 1:n
        p(i) = bitxor(x(i, j), p(i));
    end
    pac(i, :) = [x(i, :), p(i)];
    subplot(m, 1, i); stem(pac(i, :));
end
xlabel('Transmitted data, last bit is the parity bit');

% Send the packets over a binary symmetric channel (error probability 0.1)
figure
for k = 1:m
    data(k, :) = bsc(pac(k, :), 0.1);
    subplot(m, 1, k); stem(data(k, :));
end
xlabel('Received data, last bit is the parity bit');

% Check parity at the receiver and retransmit while any errors remain
figure
do  = data(:, n + 1)';
err = bitxor(p, do);
while any(err)
    stem(err);
    display(err);
    display('displaying retransmitted packets');
    for i = 1:m
        if err(1, i) == 1
            display('error detected in packet no:'); display(i);
            for j = i:m              % retransmit from the errored packet onwards
                data(j, :) = bsc(pac(j, :), 0.1);
                display(j);
            end
        end
    end
    do  = data(:, n + 1)';
    err = bitxor(p, do);
end

Results

The simulation of the selective repeat ARQ protocol in MATLAB produced the following results.

Figure 11. Illustration of the simulation of the selective repeat ARQ protocol

Illustration of the frame generation model;

Figure 12. Packet generation model

Illustration of transmitter model;

Illustration of a receiver model

Figure 13. Receiver model

The following figure can be used to show the results obtained from the simulation;

Figure 14. Data PDU transmitted

Figure 15. Data PDU received

The following figure can be used to demonstrate the errors detected during the transmission process;

Figure 16. Errors detected during the transmission process

err =

0 1 0 0 0 0 0 0 0 1 0

displaying retransmitted packets

err =

0 1 0 0 0 0 0 0 0 1 0

This is further illustrated in the following figure;

Discussion

In reference to the results obtained from the above simulation, it is clear that selective repeat is very efficient. The number of packets retransmitted as a result of errors detected during transmission is minimal. This not only reduces the time delay of the transmission but also avoids the need to invest in buffer resources that would otherwise be used to store the corrupted files. The protocol is capable of correcting errors that appear in the transmission channel and thus improves the overall performance of the transmission process (Beard and Stallings, 2016). A typical analysis of the transmission channel is provided below:

running SRprotocolX …

Tx channel P[frameSuccess] = 0.8

sender-to-receiver communication delay = 5 Comm frames

Rx channel P[AckSuccess] = 1

receiver-to-sender communication delay = 10 Ack frames

sending 5 frames …

tic: 0

RxQ: 0 0 0 0 0 0 0 0 0 0

TxQ: 0 0 0 0 0

tic: 1

RxQ: 0 0 0 0 0 0 0 0 0 0

Selective repeat ensures that a frame is kept in the link layer and not passed to the network layer until all lower-numbered frames have been delivered, so the sequence of the frames is preserved. It is also important to note that the sender retransmits only those frames for which a NAK has been received, which makes the efficiency of this system very high and clearly demonstrates that there are fewer retransmissions than with the Go-Back-N technique. The only disadvantage of the system is the complexity involved between the sender and the receiver, since each frame must be acknowledged individually (Ali, n.d.).

As noted earlier, the strength of selective repeat ARQ lies in aspects that are not possible in the other ARQ protocols: it avoids retransmitting correctly received data PDUs, and its algorithms, though complex, provide high accuracy and efficiency, with the receiver able to correct transmission errors noted on the received data frames (Ali, n.d.).

Summary and conclusion

This paper has explored the different error recovery mechanisms used in point-to-multipoint communication over satellite. A literature review was undertaken to establish the most efficient and effective error recovery strategy, and the technical aspects of the stop-and-wait, GBN and selective repeat protocols were evaluated. The selective repeat protocol was found to be the most effective and efficient mode of error recovery, and a simulation of it was carried out in MATLAB. The probability of error occurrence and of PDU retransmission was computed, and it was established that the selective repeat strategy requires relatively fewer resources than the other error recovery protocols.

Future work and recommendations

Even though the efficiency of selective repeat has been shown to be high, there are still many opportunities for improvement. There is a need to develop mechanisms for preventing and detecting the occurrence of errors in the transmission system; this work would greatly improve the efficiency of PDU transmission.

References

Ali, I. (n.d.). Architectural exploration of carrier synchronization for TDMA based satellite communication systems.

ARQ Protocols in Cognitive Decode-and-Forward Relay Networks: Opportunities Gain. (2015). Radioengineering, 24(1), pp.296-304.

Beard, C. and Stallings, W. (2016). Wireless Com Net & Systems. Pearson Australia Pty Ltd.

Bisio, I. (2016). Personal Satellite Services. Next-Generation Satellite Networking and Communication Systems. Cham: Springer International Publishing.

Galinina, O., Balandin, S. and Koucheryavy, Y. (n.d.). Internet of things, smart spaces, and next generation networks and systems.

Hercog, D. (n.d.). Selective-repeat protocol with multiple retransmit timers and individual acknowledgments.

Hong, T., Kang, K., Ku, B. and Ahn, D. (2014). HARQ-ARQ interaction method for LTE-based mobile satellite communication system. International Journal of Satellite Communications and Networking, 32(5), pp.377-392.

Ahmad, I. and Othman, M. (n.d.). Performances of Go-Back-N ARQ schemes with block transmission.

Lam, S. (1979). Satellite Packet Communication–Multiple Access Protocols and Performance. IEEE Transactions on Communications, 27(10), pp.1456-1466.

Pujolle, G. and Puigjaner, R. (1991). Data communication systems and their performance. Amsterdam: North-Holland.

Selig, M. (n.d.). Interference Mitigation with Selective Retransmissions in Wireless Sensor Networks.

Singh, A. and Chandran, H. (2012). Low complexity FEC Systems for Satellite Communication. Network Protocols and Algorithms, 4(1).

Towsley, D. (1979). The stutter go back-N ARQ protocol. Estados Unidos: The Institute of Electrical and Electronics Engineers, Inc-IEEE.

Tsuda, D. and Ha, T. (1989). Adaptive Go-Back-N. Monterey, Calif.: Naval Postgraduate School.

Final Research Project Worksheet

Organizational Background – Summary of company’s history, vision, mission, growth, development, and core competencies

ORGANIZATIONAL BACKGROUND –

  • Samuel Moore Walton, the founder of Walmart, opened the first Wal-Mart store, 'Wal-Mart Discount City', combining general merchandise and a full-scale supermarket, in 1962 in Rogers, Arkansas, after years in the retail management business; the company's headquarters are in Bentonville.

  • In 1969, the company officially incorporated as Wal-Mart Stores, Inc.

  • Walmart's USP officially became 'Sell brand merchandise at low prices', introducing the concept of Every Day Low Prices (EDLP).

  • In 1971, a profit-sharing plan was introduced, allowing employees to set aside a percentage of their salaries towards purchasing subsidized Walmart stock.

  • In 1972, Walmart (WMT) became publicly traded on the NYSE.

  • In 1975, the 'Walmart Cheer' was introduced as an employee morale booster.

  • In 1983, Walmart's first Sam's Club warehouse store (Sam's West, Inc., with membership operations) opened in Midwest City, Oklahoma, followed by five store formats over the years to gain entry into otherwise closed markets.

  • Walmart opened its first Supercenter (179,000 sq ft), Walmart Express stores (15,000 sq ft), Walmart Discount Stores (105,000 sq ft), Neighborhood Markets, and gas station/convenience stores. An upcoming format under trial is the Walmart on Campus convenience store (2,500 sq ft).

  • In 1985, the 'Made in America' campaign was launched in response to trade deficits and the loss of American manufacturing jobs.

  • In 1990, Walmart became America's number-one retailer.

  • Wal-Mart gained a global presence in 1991 as it expanded into Mexico.

  • Post-2000, Walmart launched a drive-through pick-up option, drive-in pick-up centers, and home delivery services.

  • Acquired Moosejaw, ModCloth, Bonobos, and Parcel.

  • Introduced 2-day free shipping.

Wal-Mart operates over 5,000 stores and is the world's largest corporation, employing over 1.6 million people according to the 2005 Fortune 500 list, and offering one-stop shopping convenience ranging from groceries to generic medication and car mechanical services. Its annual revenue is $288 billion, with over $10 billion in profits.

Vision: To be the best retailer in the hearts and minds of consumers and employees.

Mission: "Saving people money so they can live better" (based on Porter's model).

Growth: Walmart is implementing a 3-year growth strategy which includes:

  • Offering a seamless shopping experience, introducing 3D print figurines (2018)

  • 4-pronged shopping accessibility- (mobile, online, in-store, combination)

  • Pursuing an unorthodox route towards growth by shrinking its stores

  • Expanding the assortment in products

Development: Business Model: A perfect SMB solution with a simple interface, effortless automation, and a convenient pay-as-you-go plan. The implementation of this plan involved the following factors:

  • Lead on price

  • Invest to differentiate on access

  • Be competitive on assortment

  • Deliver a great experience

  • Maintaining the net sales at 12% pa

  • Expanding internationally into high population areas

  • Reducing operating costs

  • Adding critical capabilities

  • Enhancing the digital relationship with the customers

Core competencies:

  • Low cost operations and EDLP (Everyday Low Prices)

  • Optimal usage of Walmart's just-in-time (JIT) inventory management and IT

  • Streamlined logistics and technologies to maintain communication with suppliers and customers

  • Work culture; efficient and process driven employees

Analysis of Management Functions:

Planning. Based on what you have learned through your research, how would you characterize conditions in the planning environment? What types of problems does the organization face (e.g., structured, unstructured), and to what extent should the organization's decisions be programmed or non-programmed? How would you characterize the level of certainty, uncertainty, and risk the organization faces?

Organizing. How is the organization organized? Would you characterize it as centralized or decentralized? Describe the organizational structure and evaluate how appropriate this structure is for the organization (the strengths and drawbacks of its structure). If you learned about the organizational culture, describe the culture and evaluate the extent to which the culture enhances organizational effectiveness and efficiency.

Leading. Based on your research, how would you characterize the nature of leadership of your organization? How would you characterize its leader’s leadership style, communication style and interpersonal skills? How effective is this leader?

Controlling. Describe what performance indicators are important for this company to track and measure. How effective is management at using these controls to ensure that it is achieving its goals and objectives?

PLANNING – The organization experiences problems of both types, which it handles flexibly:

Unstructured problems: Example- Walmart faces the problem of increased competition from Target, Kroger and Amazon.

Structured problems: Employee overtime, salary, sexual harassment and discrimination.

The organization’s decisions are programmed with regard to customer dissatisfaction with a product.

Non-programmed decisions are taken by the company

  • To improve distribution systems

  • While competing with rivals and

  • Maintaining the customer base in times of an economic fluctuation or crisis.

ORGANIZING

Wal-Mart follows a ‘Two-tier system or Divisional Organization Structure’ at the top level and a matrix organizational structure at the store level. It includes these key elements:

  • Wal-Mart Realty

  • Wal-Mart International

  • Wal-Mart Specialty Stores

  • Sam’s Clubs and Super-centers

The organization structure features:

  • A centralized information system.

  • Decentralized operations

  • Frequent meetings that encourage customer feedback

Strengths

  • Reacts quickly to an unstable environment.

  • Scale of operations

  • Good Employee retention rate

  • Competence in information systems

  • Wide range of products

  • Cost leadership strategy

  • International operations

  • Training and Development

  • Feedback option: Sam’s suggestion system

  • Employee advancement programs

Drawbacks

  • Creating vacancies more than necessary

  • Hampers the career growth of its specialized professionals

  • Involved in frequent labor related lawsuits

  • High employee turnover

  • Negative media publicity

  • Too much market experimentation, e.g. Goodies.co

Organizational culture: It is inclusive in nature and founded on simple rules, such as the 'three basic beliefs', the 'ten-foot rule', and the 'sundown rule', which have steered the organization's culture, alongside Walmart's basic principles and values, which include:

  • Respect for the Individual

  • Strive for Excellence

  • Act with Integrity

The work culture at Walmart enhances its organizational effectiveness and efficiency which is gauged by the following indicators:

  • High levels of employee participation (Denison, 1990)

  • Consistent incentives, appraisal and performance measurement systems

  • Maintaining appropriate error-detection and accountability systems (Schein, 1999)

  • Training and identifying role models (Schein, 1999)

LEADING: The organization exhibits Transformational Leadership, which is based on the 4 I’s: Individualized Consideration (e.g., employee stock ownership), Inspirational Motivation (the Wal-Mart cheer), Intellectual Stimulation (EDLP, Universal Product Code barcodes, and a private satellite system for Walmart’s inventory), and Idealized Influence (strong determination, the Sundown Rule). ‘Servant Leadership’ was an outcome of this approach to leadership and involves:

  • Personal leadership

  • Resilience

  • Valuing team work

  • Delegating responsibility

  • Experimentation

Walmart has been presented with the Ron Brown Award for Corporate Leadership that recognizes companies with outstanding achievement in employee and community relations.

CONTROLLING

To gauge its Key Performance Indicators (KPIs), which focus on the issues most relevant to a product’s lifecycle impacts on the environment and society, Walmart has partnered closely with ‘The Sustainability Consortium’. Its control mechanisms include:

    • Modern technology and inventory system (JIT)

    • Digital communication with its suppliers and customers

    • Customer controlling via predictive analytics

    • Partnerships and alliances

    • Supply chain excellence.

    • Important performance indicators include but are not limited to spoilage per day, amount of electricity used per day, weekly employee hours, and store sales per day.

    • Employee satisfaction, turnover, performance, and absenteeism are also useful indicators.

Ethics and Corporate Social Responsibility

ETHICAL SYSTEMS- Global Ethics

Walmart was initially founded upon teleological ethics, which formed part of its Global Ethics system and included religious ideals of ethical involvement rather than a purely rational mindset and logical approach. I think so because Sam Walton, the founder, introduced the art of ‘Servant Leadership’ among his employees, which is based on Biblical principles. With time, however, the broader Global Ethics system seems to have overtaken the former.

ETHICAL ISSUES –

Walmart has had its share of ethical issues (roughly 5,000 lawsuits a year), as any big-box store would, which the media has highlighted repeatedly for the following reasons:

  • It discourages labor/ trade unions

  • Its influx into communities causes a net loss

  • It thrives on immigrant and child labor

  • It underpays its women staff (SC-2011)

  • Discriminates against physically challenged or elderly staff

  • Overnight employees have been locked in to ensure their shift hours were completed

  • Enforcing off-the-clock work

  • Observed theft of employees’ wages

  • Inadequate healthcare offerings

  • Employee sexual harassment

  • Uses animals to lure customers (e.g., cockfights)

ETHICAL RESPONSE-

  • In light of the Markkula app, the organization has taken the following steps to restore its brand image in the global market, as set out in its Global Ethics follow-up report: a 24x7 multilingual Global Ethics Helpline

  • Corporate social responsibility: scholarships for higher studies, disaster relief, wildlife habitat protection, diverting global waste from landfills, training US associates, and strengthening local communities

  • Global Anti-Corruption Policy

  • Prohibition of the improper use of drugs and alcohol

  • Discrimination and harassment prevention

  • Full compliance with all corporate policies and procedures related to wage-and-hour issues and off-the-clock work

CONCLUSIONS

Though Walmart’s global presence, stock market position, work culture and principles were initially commendable, things haven’t been the same since Sam Walton passed away. The store still runs, but the spirit doesn’t. It is a survivor, but largely because of the brand name and image that one man built out of sincerity, on principles of Christian discipleship. It is just managing to prop itself up on past glory.

I would probably still shop at Walmart for the convenience, but I would not invest my money, because it comes across as a company that looks more out of the window than at the state of its own house. It is busy with environmental issues, corporate social responsibility, feeding and educating the poor, and disaster relief, but reports about the corporate retail giant have brought the following to light:

  • Walmart employees earn an average of $13,312 per year, while the Federal poverty cut-off for a family of three is $14,630. Surprisingly, unionized grocery workers earn 30% more.

  • Contrary to its ‘Make in America’ campaign, it imported $12 billion in goods from China, 10% of US imports from China.

  • Walmart’s healthcare plan charges 20% of a worker’s paycheck; many employees can’t afford it.

  • Walmart offers bargains that customers pursue at any cost. For example, Walmart customers trampled a woman in their rush to buy a $29.00 DVD at one of the stores.

History has on record that organizations, like people, can be noble or evil. Walmart fails the test of bringing out the best in people. In short, if Walmart pulls up its socks, it can regain its former glory as the best retail store in the world, ethically!

SOURCES

Bergdahl M. (2006). The 10 Rules of Sam Walton. Retrieved Apr 23, 2014, from http://www.dmmserver.com/DialABook/978/047/174/9780471748120.html

Walmart Now Has Six Types of Stores. Retrieved from https://247wallst.com/retail/2014/03/22/walmart-now-has-six-types-of-stores/

Cherry K. (n.d.). What Is Transformational Leadership? Retrieved Apr 23, 2014, from http://psychology.about.com/od/leadership/a/transformational.htm

Farfan B. (n.d.). Wal-Mart Stores’ Mission Statement – People, Saving Money, Living a Better Life. Retrieved Apr 21, 2014, from http://retailindustry.about.com/od/retailbestpractices/ig/Company-Mission

Walmart’s Long-Term Growth Strategy: Try, try and try again. Retrieved from https://www.cnbc.com/2014/08/12/walmarts-long-term-growth-strategy-try-try-and-try-

Executive Summary: Walmart. Retrieved from https://www.scribd.com/doc/60889116/Executive-Summary-Walmart


Works Cited

Schniederjans, Marc J, Ashlyn M Schniederjans and Dara G Schniederjans. Outsourcing and Insourcing in an International Context. London, UK: Routledge, 2015.

Vagadia, Bharat. Strategic Outsourcing: The Alchemy to Business Transformation in a Globally Converged World. New York, NY: Springer Science & Business Media, 2013.

 

 

Outsourcing Decisions in Business: Ford Motor Corporation

Executive Summary

The process of outsourcing entails transferring non-core activities and the relevant assets to another company to perform the activities for the benefit of the outsourcing company. To qualify as outsourcing, the activities must be transferred to a completely different company. Over time, companies have shifted from outsourcing physical parts only to outsourcing intellectually-based activities. At the heart of most outsourcing decisions is cost reduction.

In that vein, the Ford Motor Corporation considers the cost implications of outsourcing to two companies: Magna Corps, based in Canada, 20 kilometers from Ford’s facilities, and Sun Components and Assemblies, based in Hong Kong, China. Analysis reveals that the Ford Motor Corporation would incur $2,368,465.60 and $2,355,900.14 a year by outsourcing manufacture of the electronic navigation module to Magna Corps and Sun Components & Assemblies, respectively. That translates to $31.8342 and $31.6653 per unit. Hence, the Hong Kong option is 0.5306% cheaper than outsourcing to the Canada-based firm. However, the decision cannot be made on cost benefits alone.

The Ford Motor Corporation has to consider other factors, such as the ability to meet deadlines, possession of the requisite resources and technologies, past production records, communication efficiency, lead time, the ability to protect intellectual property rights, the vendor’s time zone, and the effect of outsourcing on organizational reputation. In addition, it is important to look into avenues for further cost reductions. Outsourcing to Sun Components presents opportunities for further cost reductions, but it also comes with disadvantages of its own, such as rising labor costs in Hong Kong, communication difficulties, the need to protect intellectual property, and unforeseeable maritime risks. The Magna Corps option offers less room for further cost cuts, but the risks involved are smaller. After considering all the factors, the paper recommends that Ford outsource to the Canada-based Magna Corps.

 

Introduction

In the last few years, outsourcing has gained traction in the business world. However, businesses have engaged in the practice for decades, even centuries (Schniederjans, Schniederjans and Schniederjans). Still, the scale and scope of the practice have increased exponentially in the technological era. Initially, it was very similar to vertical integration because larger companies bought out their suppliers to reduce production costs. However, that evolved into present-day outsourcing, which many businesses consider unavoidable. Indeed, in the contemporary business world, some consider it a solution for most, if not all, companies (Vagadia).

In principle, outsourcing refers to the practice of relying on external resources to perform some of a company’s functions. An outsourcing company transfers out an activity, and the relevant asset, to outside suppliers (Vagadia). The outsourcing partners, also known as vendors, perform the tasks given to them and deliver the finished product or service to the outsourcing company.

Initially, companies only outsourced production of physical parts. However, the practice has shifted towards the outsourcing of intellectually based activities, such as marketing, research and logistics. Still, no business outsources its core activity; doing so would deprive it of its ability to compete effectively. Therefore, businesses outsource peripheral activities and focus on their core competencies.

However, it is worth noting that outsourcing must straddle organizational boundaries (Vagadia). Therefore, starting another production plant or relocating some functions to another facility does not qualify to be outsourcing. It must involve transferring the activities to another organization that is completely independent of the outsourcing company (Schniederjans et al.). If it is within the same company, it is just relocation, not outsourcing.

Total Annual Cost and Cost per Unit

Most outsourcing decisions have been based on the desire to reduce production costs (Schniederjans et al.). Therefore, businesses consider a number of alternatives and settle for the one that offers the highest cost reductions, both total and per unit. However, the latter is the more accurate measure of an option’s cost effectiveness because it feeds directly into the actual cost of the final product (Schniederjans et al.).

Magna Corp

Total Cost = Total Variable Cost + Total Fixed Costs

Total Variable Costs = (30.310 * 6200 * 12) + (0.824 * 6200 * 12) + (5.9/100 * 10 * 6200 * 12)

TVC = 2,255,064 + 61,305.6 + 43,896 = 2,360,265.6

TFC = 8,200

TC = 2,360,265.6 + 8,200 = $2,368,465.6

Per unit cost = TC/Number of units

Per unit cost = 2,368,465.6/ (6200*12) = 2,368,465.6/ 74,400

Annual Per unit Cost = $31.8342
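The Magna Corp arithmetic above can be double-checked with a short script. This is only a sketch of the calculation as laid out in the case figures; the variable names are illustrative:

```python
# Sketch of the Magna Corp annual-cost calculation above.
# Figures are taken from the case: 6,200 units a month for 12 months,
# a $30.310 unit price, $0.824 freight per unit, the 5.9/100 * 10 duty
# term, and $8,200 in fixed costs.
units_per_year = 6200 * 12                     # 74,400 units

unit_price = 30.310
freight_per_unit = 0.824
duty_per_unit = 5.9 / 100 * 10                 # duty term from the case

total_variable = units_per_year * (unit_price + freight_per_unit + duty_per_unit)
total_fixed = 8200
total_cost = total_variable + total_fixed      # 2,368,465.60

per_unit_cost = total_cost / units_per_year    # ~ 31.8342
print(total_cost, per_unit_cost)
```

Expressing the three variable components on a per-unit basis first makes it easy to see that the per-unit cost is simply $31.724 of variable cost plus the $8,200 of fixed cost spread over 74,400 units.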

Sun Components and Assemblies

Fixed Costs = 4,300 + 20,000 = 24,300

Declared value = 19.52 *6,200 = 121,024 a month

1 cubic foot = 1,728 cubic inches

1 unit = 12 *12* 12 = 1,728 cubic inches = 1 cubic foot

Therefore, there are 6,200 cubic feet of units a month

Variable Costs = 12 [(19.52 *6,200) + (2.20 * 6200) + (200 *3) + 100 + (4,200 * 3) + (121,024 *0.5480/100) + 1,200 + (5/100 *121,024) + (19.3/100*10*6200) + 300 + (1.210 * 6,200) + (0.15 * 121,024) + 400 + (4 * 25)]

Variable costs = 12 [121,024 + 13,640 + 600 + 100 + 12,600 + 663.21152 + 1,200 + 6,051.2 + 11,966 + 300 + 7,502 + 18,153.6 + 400 + 100]

VC = 12 [194,300.01152]

VC = 2,331,600.13824

Total Annual Cost = 24,300 + 2,331,600.1382 = 2,355,900.1382

Annual Per unit cost = 2,355,900.13824/ (6200 *12) = 2,355,900/ 74,400 = 31.6653

Percentage difference

Percentage difference = (31.8342 − 31.6653)/31.8342 * 100 = 0.1689/31.8342 * 100 = 0.5306% difference
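The Sun Components figures, and the percentage difference against Magna Corp, can be verified the same way. The list below reproduces the fourteen monthly cost components in the order they appear in the variable-cost expression above:

```python
# Sketch of the Sun Components & Assemblies annual-cost calculation above.
units_per_month = 6200
months = 12
declared_value = 19.52 * units_per_month        # 121,024 per month

# The fourteen monthly variable-cost components, in the order they
# appear in the variable-cost expression above.
monthly_variable = sum([
    declared_value,
    2.20 * units_per_month,
    200 * 3,
    100,
    4200 * 3,
    declared_value * 0.5480 / 100,
    1200,
    5 / 100 * declared_value,
    19.3 / 100 * 10 * units_per_month,
    300,
    1.210 * units_per_month,
    0.15 * declared_value,
    400,
    4 * 25,
])                                              # 194,300.01152

fixed_costs = 4300 + 20000
total_cost = fixed_costs + monthly_variable * months
per_unit_sun = total_cost / (units_per_month * months)   # ~ 31.6653

per_unit_magna = 31.8342                        # from the Magna Corp section
pct_difference = (per_unit_magna - per_unit_sun) / per_unit_magna * 100
print(per_unit_sun, pct_difference)             # ~ 0.53% difference
```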

 

Therefore, outsourcing to Sun Components and Assemblies, Hong Kong, is 0.5306% cheaper than outsourcing to Magna Corps, Canada. If the business were looking at cost effectiveness alone, it would settle for the Hong Kong option. However, in the real business world, cost effectiveness is just one of many factors considered.

Additional Quantitative and Qualitative Issues to consider

When making an outsourcing decision, a business considers a number of factors other than cost implications. To begin with, it is important to consider the technologies and resources possessed by the firm to which the organization wants to outsource some functions (Schniederjans, Schniederjans and Schniederjans). The outsourcing firm has to consider whether the vendor has the resources to handle the outsourcing needs. Basically, the vendor’s employees have to be sufficiently trained to fulfill the assignment. The selected vendor must have a physical office with state-of-the-art technologies to deal with the most painstaking outsourcing functions.

Another consideration is the vendor’s ability to meet strict deadlines. Failure to meet deadlines leads to major bottlenecks that nullify any projected cost savings (Vagadia). What’s more, the vendor has to comply with the expected quality standards. Therefore, if the company establishes that a vendor has poor quality control measures and lacks a solid backup plan in case it misses a deadline, it is prudent to not hire the vendor (Schniederjans et al.).

Before making an outsourcing decision, a firm has to look into the potential vendors’ past production records (Schniederjans et al.). From their history, the firm can establish which vendors are able to work under minimal supervision. It is imperative for the outsourcing organization to choose a vendor that is self-driven, as that would allow the business to concentrate on its core capabilities.

Outsourcing decisions should also take into consideration the effect an outsourcing decision can have on the brand (Schniederjans et al.). Some businesses derive their advantage from the uniqueness of the products they make. The decision to outsource might lead to the leak of business secrets. Therefore, a business must ensure that outsourcing does not risk exposing the qualities that make it unique and successful (Vagadia). That requires due diligence to establish whether potential vendors have demonstrated a tendency to keep their business partners’ secrets.

Businesses also consider the time zones of potential vendors before making outsourcing decisions (Vagadia). Huge differences in time zones can make coordination between the two firms difficult; it is easier to liaise and coordinate activities with the vendor when the time zones are close. On the other hand, different time zones can be advantageous because the cooperation is akin to running 24-hour production. When it is daytime in Canada, it is night in Hong Kong, and vice versa. The vendor can then take instructions from the outsourcing organization and act on them while enjoying the benefits of the time difference. While time zone is not one of the most important considerations, it is one that businesses considering outsourcing ought to look at critically (Schniederjans et al.).

Efficiency of communication is also critical to any prudent outsourcing decision. Undeniably, outsourcing alters communication efficiency between different organizational departments and management (Schniederjans et al.). That is partly because of language and cultural differences between the organization and the vendor. However, organizations have the responsibility to assess and outsource to a vendor with whom communication would be most efficient (Vagadia).

Businesses should consider whether the vendor they outsource to respects intellectual property rights (Vagadia). Some countries have a poor culture on intellectual property; they infringe on property rights, without any legal remedies. While it is possible to manage local outsourcing and near-shore alternatives with non-disclosure agreements and contracts, more comprehensive measures are needed when dealing with outsourcing alternatives that are located in distant corners of the globe (Schniederjans et al.).

The company ought to reflect on the lead time of the vendors’ production processes. Essentially, lead time affects the time the company will take to get its finished products into the market (Schniederjans et al.). Therefore, a vendor with shorter lead times would be a better outsourcing destination. It would enable the Ford Motor Corporation to make and deliver its products to the market within a shorter time (Schniederjans et al.).

Publicly traded companies have to consider the impact an outsourcing decision might have on the value of the company (Vagadia). Therefore, before outsourcing, it is imperative that such companies consider the perceptions of investors in case information leaks out. While information is not expected to leak, it is always good to weigh all aspects of the decision (Vagadia). For instance, if the company outsources to a vendor that has a poor reputation when it comes to human rights violations, investors would lose confidence in it and its stock would plummet. In contrast, if it outsources to a reputable vendor, its stock is likely to go up.

The organizational culture is another factor that influences the choice of vendor or outsourcing partners. Organizations are always on the lookout for business partners who have organizational cultures that are similar to theirs (Vagadia). Therefore, a business with flexible work schedules and a calm, friendly working environment would not want to partner with another firm that has rigid schedules. That would be in contravention to organizational mission and values that inform the cultures (Schniederjans et al.). It would also negatively affect the outsourcing representatives that the company sends to assess and monitor some issues at the vendor’s facilities.

Strategies to Reduce Overall Costs

The cost of the Hong Kong option can be significantly reduced by shortening the shipping lead time. At two months, a lot can change in the market while the company waits to receive materials from the outsourcing partner (Schniederjans et al.). That could lead to losses that outweigh the cost benefits of outsourcing (Vagadia). Therefore, if the shipping lead time from Hong Kong could be cut by two weeks or a month, the overall costs incurred by the Ford Motor Corporation would fall (Vagadia).

Costs can also be reduced by choosing the Free on Board (FOB) option instead of the Cost, Insurance, Freight (CIF) option currently on offer. Under CIF, the seller assumes all costs and liabilities associated with shipping the goods until they reach the buyer (Schniederjans et al.). Under FOB, by contrast, responsibility for the goods and their associated costs transfers to the buyer once the goods are loaded. Using FOB, the buyer (the Ford Motor Corporation) negotiates directly with the involved parties and can therefore obtain cheaper insurance and freight rates (Schniederjans et al.). Under CIF, on the other hand, sellers can connive with insurers and shipping agents to inflate costs and share the resulting profits. Communication with the shipping agents while the goods are in transit is also improved under FOB, since the buyer and the shippers are in direct contact (Schniederjans et al.).
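A simple way to see the FOB argument is to compare landed cost per unit under each term. All of the freight and insurance figures below are hypothetical and serve only to illustrate the trade-off; the case supplies none of them:

```python
# Hypothetical CIF vs. FOB landed-cost comparison per unit.
# Only the $19.52 unit price comes from the case; every other figure
# is an illustrative assumption.
goods_cost = 19.52

# Under CIF the seller bundles freight and insurance into the price.
cif_freight_and_insurance = 1.10   # hypothetical bundled charge

# Under FOB the buyer negotiates freight and insurance directly.
fob_freight = 0.80                 # hypothetical negotiated freight
fob_insurance = 0.12               # hypothetical negotiated insurance

cif_landed = goods_cost + cif_freight_and_insurance
fob_landed = goods_cost + fob_freight + fob_insurance

savings_per_unit = cif_landed - fob_landed
print(savings_per_unit)            # positive means FOB is cheaper here
```

FOB only wins when the buyer’s negotiated rates undercut what the seller builds into the CIF price; if the hypothetical negotiated figures were higher, the comparison would flip.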

The outsourcing company can also reduce overall costs by liaising with another company and importing goods together in one shipment (Vagadia). While costs that depend on the declared value and size of the cargo are unlikely to change, costs such as ocean transportation, port handling charges, customs brokerage fees, foreign exchange hedging, and inland container transportation can be shared with another interested company. That would reduce the cost per unit significantly and allow the company to charge more competitive prices for its finished products (Vagadia). Even a one-dollar reduction in unit price would give the company an edge over its competitors.
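The arithmetic behind cost sharing can be sketched as follows. All the figures (units per shipment, value-based costs, fixed costs, the 50/50 split) are assumptions chosen for illustration, not numbers from the case:

```python
# Illustrative sketch only: how sharing fixed shipment costs lowers per-unit cost.
# All figures below are assumptions for the example, not data from the case.

def per_unit_cost(units, value_based_costs, fixed_shipment_costs, cost_share=1.0):
    """Landed cost per unit. Costs tied to declared value are unaffected by
    sharing, while fixed costs (ocean freight, port handling, brokerage,
    inland container transport) are split according to cost_share."""
    return (value_based_costs + fixed_shipment_costs * cost_share) / units

units = 10_000              # assumed units per shipment
value_based = 50_000.0      # assumed duty/hedging costs tied to declared value
fixed = 30_000.0            # assumed shareable fixed shipment costs

alone = per_unit_cost(units, value_based, fixed)        # bears 100% of fixed costs
shared = per_unit_cost(units, value_based, fixed, 0.5)  # splits fixed costs 50/50

print(f"alone:  ${alone:.2f}/unit")   # alone:  $8.00/unit
print(f"shared: ${shared:.2f}/unit")  # shared: $6.50/unit
```

Under these assumed figures, splitting the fixed charges halves their per-unit burden, cutting the landed cost from $8.00 to $6.50 per unit.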

Magna Corps’ overall costs can also be reduced by cutting packaging costs. Since the goods are delivered to a single client for production purposes, each unit need not be packaged separately; individual packaging increases the space the products occupy and the cost of packaging them (Schniederjans et al.). The company can therefore agree with the vendor to have the gadgets packaged in batches of a dozen or even 24 pieces, which would significantly reduce packaging costs. The move would also drastically reduce the business’s environmental impact, improving the company’s reputation in the eyes of customers and the general public (Schniederjans et al.).

Better stock control would also help reduce the overall costs incurred by the company (Schniederjans et al.). Through stock optimization, the company trades warehousing costs off against shipping costs. In some instances, it is prudent to make larger shipments that require more warehousing space: while warehousing and holding costs rise, the goods are shipped under fixed ocean transportation, travel and training, and port handling and container costs, so the shipping cost per unit falls (Vagadia). It is up to the company to establish which strategy reduces total costs: more frequent, smaller shipments, or larger shipments with higher warehousing and holding costs (Vagadia). That can be determined with some simple optimization equations.
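One standard way to formalize this shipping-versus-holding trade-off is the classic economic order quantity (EOQ) model, sketched below. The demand, cost-per-shipment, and holding-cost figures are assumptions for illustration, not data from the case:

```python
import math

# A minimal sketch of the trade-off described above, using the classic
# economic order quantity (EOQ) model. All inputs are assumed figures,
# not numbers from the case.

def total_cost(q, demand, cost_per_shipment, holding_cost_per_unit):
    # Shipping cost falls with larger shipments (fewer shipments per year),
    # while warehousing/holding cost rises with the average stock held.
    return (demand / q) * cost_per_shipment + (q / 2) * holding_cost_per_unit

def eoq(demand, cost_per_shipment, holding_cost_per_unit):
    # Shipment size that minimizes the sum of the two costs.
    return math.sqrt(2 * demand * cost_per_shipment / holding_cost_per_unit)

D, S, H = 120_000, 4_000.0, 2.5   # assumed annual demand, $/shipment, $/unit/year
q_star = eoq(D, S, H)
print(f"optimal shipment size: {q_star:.0f} units")
```

The optimal shipment size balances the two cost curves: shipping either more often or less often than `q_star` implies raises the total annual cost.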

The company can also significantly reduce the annual travel and training costs (Schniederjans et al.). For instance, instead of travelling, some meetings can be held through video conferencing, which would almost eliminate the hotel costs incurred when officials travel to Hong Kong for meetings. Officials would then only have to travel when they must physically inspect the products being manufactured by the vendor, part of what Vagadia describes as the unavoidable costs of outsourcing. It would, however, be difficult to lower training costs, as employees would still need to be trained on how to handle outsourcing functions and interact with the vendor’s representatives (Vagadia).

The overall costs of both options can be reduced by limiting damage in transit (Vagadia). It is fundamentally difficult to get every product from one point to another in good condition, but efforts must be made to reduce the number of damaged goods per shipment (Schniederjans et al.). The first step is limiting movement of the products inside the container while in transit, which means either filling the container to leave no room for movement or tying the goods in position. As a result, even as the ship or truck moves, the goods are held in place, reducing the likelihood of the products colliding (Vagadia). Another option is using soft material to shield the products and absorb any shock from the natural movement of the vehicle.

Recommendations

To begin with, the Sun Components and Assemblies (Hong Kong) option is cheaper than the Magna Corps option by 0.5306% per unit. Outsourcing the function to the Hong Kong vendor would therefore be cheaper in terms of cost per unit. However, such a distant option carries hidden costs that the Ford Motor Corporation is unlikely to foresee before taking the decision (Vagadia). For instance, turbulence at sea is unforeseeable, but it can disrupt maritime activities when it occurs.
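To put the 0.5306% margin in perspective, it can be translated into annual dollars. The unit cost and annual volume below are assumed purely for illustration; only the percentage comes from the comparison above:

```python
# Illustrative arithmetic only: translating the 0.5306% per-unit advantage
# into annual dollars. Unit cost and volume are assumed, not from the case.
magna_unit_cost = 100.00                          # assumed Magna Corps cost per unit
hk_unit_cost = magna_unit_cost * (1 - 0.005306)   # 0.5306% cheaper per unit
annual_units = 500_000                            # assumed annual volume

annual_saving = (magna_unit_cost - hk_unit_cost) * annual_units
print(f"annual saving from the Hong Kong option: ${annual_saving:,.2f}")
```

At these assumed figures the saving is about $265,300 a year: real money, but small enough that a single unforeseen disruption of the kind discussed above could erase it.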

Still, the Hong Kong vendor presents more options through which overall costs can be reduced. These include switching from CIF to FOB, reducing travelling costs, shortening the shipping lead time, better stock control, and liaising with other importers to share shipments and costs (Schniederjans et al.). By contrast, only two viable options exist for Magna Corps: reducing packaging costs and reducing the risk of damage while goods are in transit. The Sun Components and Assemblies option can therefore cut costs further, diminishing the per-unit cost of outsourcing the service.

Despite the cost-saving options available to Sun Components and Assemblies, higher tariffs and increased customs duty rates in the future would increase the option’s costs significantly (Schniederjans et al.). What’s more, it poses greater risks to the intellectual property of the Ford Motor Corporation than the Magna Corps option does (Schniederjans et al.). Like the Ford Motor Corporation, the latter is based in Canada and hence operates under the same jurisdiction. For that reason, it has more incentive to preserve intellectual property rights than the Hong Kong-based option.

Moreover, owing to its proximity to the Ford Motor Corporation, Magna Corps can get goods to the outsourcing company at shorter notice than Sun Components and Assemblies, and can make products that satisfy the company’s demand at that moment. In contrast, the Hong Kong-based vendor has to manufacture large volumes of products in advance. Consequently, while the company can ask Magna Corps to make subtle alterations to the product at short notice, Sun Components and Assemblies requires early notification because it must manufacture the products in advance for shipping (Vagadia).

Therefore, while the Hong Kong option is cheaper and has the potential to become even cheaper, it is highly susceptible to tariff changes and natural disasters at sea, is more rigid, has a long shipping lead time, and is risky with respect to the protection of intellectual property rights (Schniederjans et al.). In addition, labor costs are rising rapidly in China (Vagadia); before long, the cheap labor that attracts outsourcing companies to the country will no longer be available. As a result, I would recommend that the company outsource production to Magna Corps, Canada. It is marginally more expensive but less risky and more flexible to modern-day business needs; if simple modifications are needed, for instance, they would be implemented faster through Magna Corps. In short, the marginal difference in per-unit cost is outweighed by the other factors the business ought to consider.


Works Cited

Schniederjans, Marc J, Ashlyn M Schniederjans and Dara G Schniederjans. Outsourcing and Insourcing in an International Context. London, UK: Routledge, 2015.

Vagadia, Bharat. Strategic Outsourcing: The Alchemy to Business Transformation in a Globally Converged World. New York, NY: Springer Science & Business Media, 2013.


Consent Discussion

  1. What is consent?

Consent is the concept whereby an individual gives permission for someone else to engage in a sexual activity with them. The parties involved must not feel pressured, and they should be able to stop the activity at any point if they feel uncomfortable or no longer wish to continue (Cowling 112). Thus, if a person withdraws their consent at any point, that withdrawal must be taken seriously, since any sexual act that proceeds regardless can be deemed rape or harassment. Without consent, sexual behavior inflicts physical, psychological, and emotional torture coupled with fear and intimidation.

Consent to engage in sexual activity at one point does not guarantee that the permission holds for subsequent acts. One cannot presume that because a person gave a go-ahead once, similar acts can be undertaken in the future without seeking their permission (Williams 24). Every individual willing to engage in these acts has a right to bodily sovereignty, so it is the responsibility of the individual initiating the act to obtain permission before starting. Both parties must be in agreement, which means clarification must always be sought, especially where either party is unsure whether it is okay to start or to continue the activity.

  2. What messages do media send about sexual consent and sexual violence?

The media has, to a great extent, sent misleading messages regarding sexual consent and sexual violence. Men and women are frequently objectified, especially in advertisements that promote alcohol use by associating it with success and sexual power. The media also portrays victims as people who brought these problems on themselves. Blaming the victim makes it possible for others to distance themselves from the sexual encounter and to reassure themselves that such things cannot happen to them.

The media also makes sexual consent and sexual violence appear to be unpleasant occurrences that only happen to certain individuals. When people watch such stories, they tend to formulate reasons that make them feel better than the victims (Cowling 126). They feel they are not the kind of people who fall into such traps easily; rather, they believe they would have made better choices if faced with situations in which they might be sexually harassed. Such reactions are not helpful to the victims or to the many people who rely on the media as a vital source of information.

The messages sent by the media end up normalizing sexual assault and violence by shifting the blame onto the victims. If people know that society will blame them for playing a role in their own harm, they will not feel comfortable stepping out and sharing their story (Williams 25). They may conclude that their own actions played a key role in bringing the misfortune upon themselves. It then becomes normal to find members of society reinforcing the abusers’ reasoning by blaming the victims, and leaving the victims with the full responsibility of fixing the situation. Such blame games allow society to normalize sexual assault and violence, because perpetrators can always find a way to avoid being held accountable for their actions.

The messages from the media also create an environment in which sexual violence and assault are pervasive and accepted as unavoidable. In this environment, members of society do not have to take an active role in promoting sexual violence (Cowling 84); rather, they normalize it through a series of false beliefs that have never been examined or proven. The media does this through song lyrics with harmful and confusing messages, images that reduce humans to sexual beings, advertisements that portray people as sexual objects, language that downplays sexual assault, and victim blaming.

The media also normalizes sexual violence by teaching the masses how not to get raped instead of addressing the real issue: why people should refrain from acts of sexual assault and sexual violence (Williams 27). As a result, victims underestimate what happened to them and may not consider it rape. They may think they are going overboard by reporting the violence, seeking help, or even finding someone to talk to. Even when victims are convinced they are right, they hold back because they feel they are to blame, and they fear that no one will believe them if they report the incident. All these feelings stem from the way the media normalizes such incidents through its reporting and the messages shared across social platforms.

  3. What things are women taught to do before they go outside to protect themselves from being raped? Create a list.

  a) Women are taught to be cautious about their hairstyle, since it may mark them as potential victims. Hairstyles like braids and ponytails are not ideal for events where partying and binge drinking take place, as they make it easy to be grabbed by the hair.
  b) To be cautious of their dressing style and the kind of messages it sends out. Clothes that are too revealing may make men look at them in a sexual manner, which may make them vulnerable.
  c) To appreciate the importance of communicating their desires effectively when the need arises. Through this, they can create formidable relationships that protect their interests.
  d) To avoid being alone in isolated areas. If they come into contact with people who try to lead them to such areas, they should get away from them quickly.
  e) To steer clear of alcohol and other drugs, since it is easy for people to take advantage of them when they are under the influence. If they must partake, they should set consumption limits.
  f) To attend group events in the company of friends they can trust, since they will be safer with someone looking out for them.
  g) To be aware and alert of their environment. For instance, in crowded spaces they should know where the exits are and how to orient themselves if they feel lost.
  h) To trust their gut instincts. This enables them to react immediately if they find themselves in places that make them uneasy.

  4. What are men taught to do before they go outside to protect themselves from being raped?

  a) To watch their drink or avoid drinking altogether. It is easy for someone to slip a date rape drug into an unattended drink without leaving any evidence, so one should finish a drink before stepping away for a moment.
  b) To find out any relevant information about the areas they intend to visit. This lets men know the resources that could come in handy wherever they might be vulnerable to assault. The same applies when travelling to foreign countries, where they would rely on local resources such as laws and emergency centers to report such incidents.
  c) To trust their instincts. Men must speak up if they find themselves in uncomfortable situations, even on a date with someone they have learnt to trust to some extent.
  d) To party responsibly. Colleges and campuses today are filled with recreational drugs, and a careless person may end up drugging themselves, leaving them unable to consent to sex; in such cases the perpetrator takes full control.
  e) To protect themselves and their personal belongings. Sometimes it is important to be equipped with self-defense skills or tools that can ward off perpetrators. Pepper spray, for instance, can leave a perpetrator powerless, and self-defense skills can help one fight and flee.

  5. Create a list to give to men of how to not rape or to not perpetuate a rape culture.

For the rape culture to wear off and eventually die, there are a number of things men can do to avoid perpetuating it. These include:

  1. Avoiding language that degrades women or portrays them as sexual objects.
  2. Looking out for people who cannot defend themselves when someone else trivializes rape or makes offensive jokes against them.
  3. Supporting those who come out to report that they have been raped or sexually assaulted.
  4. Thinking carefully about the messages they send to the media regarding violence, relationships, and sexual assault.
  5. Respecting every individual’s physical space in both casual and formal situations.
  6. Addressing the issue instead of shifting the blame to the victims or to alcohol and drugs, and ensuring that perpetrators are held accountable for their actions.
  7. Communicating their sexual intentions with their partners rather than assuming the other person is in agreement.
  8. Being active bystanders who do not let stereotypes shape how they act or perceive situations.

Preparing this list has been a great eye-opener for me, since it has brought to light some of the notions that advance the rape culture. Men are directly involved in creating environments where sexual violence is normalized, and their actions are often excused in popular culture and the media. Inappropriate language, the objectification of women’s bodies, and the glamorization of assault are often perpetrated by men, and this leads to a society in which the rights and safety of women are disregarded.

From this list, one thing that is clear is that men play a crucial role in advancing or impeding efforts to create an environment that is safe for both men and women. Men have always been vocal and have the channels for airing issues in societies where people tend to listen to them more than to their female counterparts. Men are also most often the perpetrators of these acts of violence, since they often hold the dominant position in a relationship along with the physical strength to subdue their victims. Men have also been known to objectify and demean women by referring to them as the weaker sex, and they often find reasons to blame women in the event of an assault and to absolve themselves even when they are clearly in the wrong. It is therefore proper for them to step up and take the lead in stopping the rape culture.

  6. What are some of the barriers men face when trying to challenge sexist behavior?

  a) It is often not easy for men to accept that they too can be victims of rape or sexual assault. This makes it hard for men to come forward and speak out, for fear of being associated with what is seen as the weaker sex, in this case women.
  b) The sexual assault may be committed by someone well known to the victim. People may therefore live in denial and even seek reasons to justify their abuser’s acts, which makes it hard for the real issues to be addressed.
  c) The motivation behind sexual assault is often power, hostility, and control. Not all members of a society hold these in equal measure, so those who are more vulnerable by age or sex will always be the likelier victims.
  d) Sexual offenders come from all backgrounds of race, age, gender, education, and occupation. They often appear decent, which makes it hard for victims to step forward and report an assault, since they fear they will not appear credible. This barrier must be addressed so that everyone feels comfortable speaking out without the fear of being judged harshly.
  e) Sometimes the victims of rape are married couples or people who have dated for a long time. Most people find it hard to believe that sexual assault can happen within such unions, so victims tend to shy away instead of speaking out and having their issues addressed.

  7. How can coaches, teachers, athletes, and entertainers use their influence to challenge men’s violence against women?

Coaches, teachers, athletes, and entertainers can use their influence to challenge men’s violence against women in a number of ways. Coaches can help create and reinforce a culture of respect by highlighting the strength and excellence of women in certain sports (Cowling 182), and by organizing friendly events where men and women interact and view each other beyond physical strength. Teachers, for their part, are influential in molding students and shaping aspects of their character that influence how they perceive women; for instance, they can introduce an appreciation strategy in which both male and female students are challenged to treat each other with respect.

Athletes have a wide following, which gives them a platform to reach the masses. They can create advertisements or group initiatives geared towards embracing the important role of women in society; given their influence, it is highly likely that they will change how society responds to violence against women. Entertainers, on the other hand, can use the media to send positive messages that embrace womanhood. For instance, they can create movies in which women take the lead role in challenging the male stereotypes set up by society, helping people learn to appreciate the important role that every member of society plays.


Work Cited

Cowling, Mark. Making sense of sexual consent. Routledge, 2017.

Williams, Christine L. “Sexual harassment in organizations: A critique of current research and policy.” Sexual Harassment and Sexual Consent. Routledge, 2018. 20-43.

Market penetration (existing markets, existing products)


Successful managers understand that if their company is to expand and remain competitive in a dynamic business environment, they cannot stick with a “business as usual” mindset in the long run, even when things are going well within the organization. They must find new ways to reach new clients and increase sales, revenues, and profits. Various options are available, such as opening new markets, developing new products, or market penetration. The Ansoff Matrix can be used to determine which strategy will work best for our organization, to help managers devise the most suitable plan for our situation, and to weigh the possible risks of each option.

For our existing products, market penetration would be the best strategy. When our company enters its markets with its present products, such as surfboards, foils, bicycles, and skis, this is referred to as market penetration. It is carried out by taking market share from rival companies; other ways of penetrating the markets are to get current clients to use more of our products or to find new customers for our goods. Market penetration is viewed as a low-risk strategy that we can use to grow and expand the company. It is the safest of the available options, because the business focuses on increasing sales of its existing products in its current markets: the markets hold few surprises for the business, and the stakeholders know the products work.

The market penetration strategy will enable the company to increase sales of its existing goods in its markets, thereby increasing our market share. It involves minimal risk, so the company’s management can consider using this option. To do this, the organization can attract clients away from rival companies and/or ensure that our existing clients purchase our goods more often. The business can achieve this by reducing prices, increasing promotion and distribution support, making modest product refinements, and acquiring a competitor in the same markets. This implies that the corporation will be able to increase sales of existing products in the current markets faster than its competitors. Market penetration involves evaluating how the firm’s present offerings can be sold in the existing markets or how the current markets can be expanded. Our company will have to offer better-quality services than rival companies to gain and sustain a competitive advantage, and it can accomplish this by creating various customer segments and offering each segment goods and services that meet or even exceed their demands.

The market penetration strategy will also enable the business to grow faster. Market penetration is the most effective strategy to use because our marketing and business objective is to expand our customer base. When we offer better prices than rival companies, luring away their customers will be easier than previously anticipated, and faster growth is closely tied to relatively low prices: the more reasonable the prices, the more rapid the growth of the organization, as it makes more sales and therefore earns more revenue and profit. Besides, the strategy will enable the business to gain an economic advantage. If the industry develops the way we anticipate, market penetration can bring cost advantages: low prices assure the growth of the client base, which means the company can increase the quantities of products ordered from vendors and thus earn higher profits despite the low prices. Moreover, our company can take more risk and initially purchase products in bulk at a discount before executing the market penetration strategy.

The option will also allow our organization to combat competition, one of the greatest strengths of market penetration. If competitors attempting to transform and expand rob clients from our firm, our revenues and profits fall; if the business wishes to remain the market leader, its only option is to outplay those rivals. For example, low upfront prices would force our rivals to switch to alternative options with different pricing regimes. In this manner, our company would attract the lost customers and put the competitors on the defensive or on the verge of quitting the market.

Market penetration may also result in fast diffusion and adoption of the company’s products in its markets. If the organization’s products are reasonably priced and of the same quality as the competitors’, our goods will spread into the markets and be bought by clients quickly. This may create goodwill among the initial clients who buy the products because of the aggressive pricing, and establish customer referrals as well as strong customer loyalty. Thinner profit margins from the dynamic pricing encourage efficiencies, though capabilities will be required to sustain profitability, and the low margins could discourage rival companies from entering the markets. If distributors see high product turnover because of quick sales, this may help build enthusiasm for our products among distributors such as wholesalers.


Procurement for Global Organizations Operating

Executive Summary

FedEx Express is a firm that ventures into transportation, information, and logistics solutions services, thus necessitating vibrant procurement measures. FedEx Corporation has six main operating companies, FedEx Express, FedEx Trade Networks, FedEx Freight, FedEx Ground, FedEx Custom Critical, and FedEx Services, with operations in different countries across the world. The procurement process remains a vital activity of the corporation in achieving its business goals and objectives. FedEx Express incorporates various strategies meant to control the level of engagement with external suppliers, who play an integral role in the company’s success in the transport industry. FedEx employs a center-led supply chain management (SCM) procurement model, which has centralized the procurement teams from the different companies so that procurement is done centrally and with a standard approach. Its seven-step procurement model guides its procurement activities, and the company utilizes the e-procurement tool offered by the Ariba Buyer system to facilitate these steps. However, significant procurement risks face this system and need to be appropriately addressed if the company’s procurement processes are to be made sustainable. It is essential that the corporation consider the full integration of the steps into its technological system to ensure that all levels of operation work in order. The implementation of an ERP-based system as well as an e-commerce system is a crucial segment of the e-procurement system that will ensure all processes are effectively accomplished and the procurement process becomes sustainable.

Introduction

The performance of an organization depends on the strategies put in place to govern the interaction among stakeholders. It is essential for an institution to evaluate the underlying factors that contribute to the growth of the company. The management team has the responsibility of examining different procurement structures and determining the most appropriate tool the firm can embrace to yield positive outcomes. Improved structures contribute to successful operations in which a company has control over its suppliers, with the aim of managing the risks associated with diversity. Services can be affected if proper measures are not put in place to curb the challenges of involving external parties in the supply chain. Identifying the organization helps in determining the appropriate strategies to embrace during the procurement process and in avoiding unnecessary conflicts associated with a diversified business environment. Focusing on a specific firm is a critical step towards developing a practical framework the company can use to manage the risks that hinder positive organizational growth during procurement activities.

Organization Overview

Understanding the nature of the organization and its primary activities is a critical move towards an enhanced analysis of the procurement practices the firm embraces. Individuals make crucial decisions that affect the operations of a firm and transform its image by improving the services offered to customers. FedEx Express ventures into transportation, information, and logistics solutions services, necessitating vibrant procurement measures. According to Lakew (2014), FedEx Corporation has six main operating companies, including FedEx Express, FedEx Trade Networks, FedEx Freight, FedEx Ground, FedEx Custom Critical, and FedEx Services, operating in different countries across the world, all of which require improved services for effective communication to be realized in various sectors. Shipment of commodities from one destination to another is an uphill task that cannot be achieved if the necessary precautions are not factored into the planning process.

FedEx Express incorporates various strategies meant to control the level of engagement with external suppliers, who play an integral role in the company’s success in the transport industry. The company focuses on quality services, offered through strategic measures when dealing with various suppliers in different sectors. Dealing with diversified customers is an essential exercise that shapes the direction of a firm, because clients present different needs that must be addressed through the services the company offers. The large number of employees at FedEx Express contributes to the outstanding services at the company, which attract more clients and thus improve the economic standing of the institution, as also noted by Nowak and Hough (2018). Venturing into global business presents different kinds of needs influenced by foreign policies in different countries, creating the need to appreciate diversity and maintain an all-inclusive model. The purpose of such a procurement model is to ensure different needs are addressed using unique approaches applicable to diversified situations.

Literature Review

There is a wide body of literature addressing the broader topic of procurement management and its various elements. Risk remains a central issue that authors consistently address when discussing procurement sustainability. The procurement process is considered the backbone of every business activity, as it establishes the basis for business initiatives and keeps marketplace actors relevant in the cycle of business activity. It is therefore important to identify the various procurement aspects and how they affect companies' business activities. This section critically analyzes literature on the process of project risk management and the importance of sustainable procurement practices in relation to FedEx.

Project Risk Management

Understanding the kind of activities an organization conducts is an essential step towards effective operations, since it allows appropriate strategies to be employed to improve the firm's results. Risks are typical in an environment where various parties are involved in accomplishing a specific task, as observed by Liu, Meng, and Fellows (2015). FedEx Express focuses on diversity, which creates avenues where numerous challenges can arise from the mechanisms the company uses to deal with suppliers. Many organizations face problems due to poor planning of their business activities, which affects the economic growth of the company and of society at large.

Managing risks is an essential practice that contributes to the outstanding performance witnessed among established organizations. Cagliano, Grimaldi, and Rafele (2015) stated that an analysis of the underlying factors must be conducted for the management team to identify potential threats that could have negative impacts during the implementation of a project. Service delivery determines the fate of a company, as clients seek quality services from organizations with an elaborate culture. Various stakeholders play a critical role in developing an organizational culture that attracts clients, which makes collaborative working and information sharing necessary for the company's positive growth. Established procurement departments are designed to mitigate project risks by sourcing service providers with the relevant qualifications to supplement the company's activities.

Importance of Procurement Processes for FedEx

Procurement processes contribute to the performance of an institution because goods and services are sourced to enhance the image of the firm. It is difficult for an organization of FedEx's caliber to achieve its objectives without incorporating services from external players in the industry. Offering diversified business solutions to clients calls for proper strategies to curb the project risks associated with conflicts caused by poor procurement approaches. According to Ruparathna and Hewage (2015), managing project risks cannot be achieved without evaluating the procurement practices that FedEx Express embraces, because those practices influence the outcomes of the company's engagements with parties from different regions.

Stiff competition in the transport industry must be evaluated for organizations to understand how to create a competitive advantage. FedEx requires improved procurement practices that meet the anticipated standards in the sector so that it can attract customers who are motivated by the improved quality of its shipping services. Individuals prefer firms built on transparent procurement structures, which are paramount to effective functioning. Suppliers contribute to FedEx's growth because many services are outsourced to serve a fast-growing population that requires better transport channels, saving time and maximizing interaction among stakeholders. Patrucco, Walker, Luzzini, and Ronchi (2018) noted that managing a supply chain involves multiple practices designed to evaluate the importance of each player in the system, with the aim of eliminating unrelated parties that adversely affect the survival of the company. Projects are harmed by poor procurement decisions because the firm's operations depend on resources supplied by qualified companies. Employing unethical standards might hinder the realization of the set objectives, because the quality of FedEx's services would be compromised by malicious individuals within the supply chain, ruining the organization's reputation.

Working with suppliers can pose a critical threat to the success of a project, because the challenges associated with diversity are transferred to consumers, who may then change their service provider. FedEx's competitors are keen to exploit the openings created when the company uses poor procurement approaches, since customers shift to institutions that are ready to meet their expectations. Preventing such scenarios calls for a proper analysis of the factors that influence engagement between the stakeholders involved in the business activities. According to Amann, Roehrich, Eßig, and Harland (2014), improved procurement practices will enhance the performance of FedEx Express by increasing the number of customers who use the company's services to ship goods across the world. Incorporating best practices into procurement procedures is the only option the company has for meeting expectations in the complex business environment created by the internationalization of services. A company operating in different nations cannot ignore the role procurement plays when engaging different service providers. Players in the supply chain significantly shape an organization's image because they complement its operations by filling gaps in different sectors. The transportation industry has numerous areas that cannot be addressed by a single firm, which necessitates collaborative working in which some firms are contracted to deliver services on behalf of the receiving company.

The success of global organizations is shaped by the decisions of their management teams, who must consider external factors. Procurement influences the outcomes of engagement because FedEx works with different entities to achieve its objectives. Given the diversity of global markets, FedEx Express needs to incorporate procurement best practices designed to monitor and control suppliers with the aim of meeting customers' needs (Keränen, 2017). An improved procurement strategy contributes to successful interaction with suppliers because specific measures are developed to minimize expenses and increase income. Companies that appreciate the role of procurement processes realize their goals because suppliers are managed to reduce the project risks associated with diversified operations.

Procurement Practices at FedEx

Procurement Strategies and Approaches

The transport, information, and logistics solutions that FedEx offers its customers form a complex combination of services that calls for effectiveness and efficiency in the sourcing and purchasing process, as noted by Freight, Industrials and Year (2018). FedEx Corporation has six main operating companies: FedEx Express, FedEx Trade Networks, FedEx Freight, FedEx Ground, FedEx Custom Critical, and FedEx Services. Each has complex procurement processes overseen by the FedEx Corporation, necessitating effective supply chain management. To ensure that all available materials are accessed and effectively utilized to meet customers' specifications and needs, it is crucial that an effective procurement strategy is established within the organization. According to Smith and Offodile (2014), FedEx has over the years sought to unify the procurement processes of its six major operating companies under a center-led initiative, developed to ensure that the corporation conducts centered procurement activities that attend to the needs of its different companies. The center-led supply chain management (SCM) procurement model has centralized the procurement teams from the different companies so that procurement is done centrally and with a common approach. This ensures that the sourcing process is efficient and that identified suppliers are confirmed to be up to the task. It also provides critical insights for conducting sourcing and purchasing activities appropriately through contract establishment. The company has adopted a seven-step procurement model to guide its procurement activities and uses the E-procurement tool offered by the Ariba Buyer system to facilitate those steps.
Wisner, Tan, and Leong (2014) identified these steps as the principles of supply chain management.

Step 1 involves users submitting a requisition for an item to the sourcing team, which establishes whether a sourcing strategy needs to be developed around the item. The team applies a return-on-investment criterion to establish whether the item will help the company achieve its business goals and objectives if a full-blown supplier evaluation and selection is conducted. This first step is crucial in determining company outcomes regarding profit-making and meeting client needs (Christopher, 2016). If the spend is large enough to call for supplier evaluation, the assessment proceeds by identifying the nature of the purchasing activity. Otherwise, the user is advised to place a simple purchase order through the Ariba system.
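The routing decision in Step 1 can be pictured as a simple threshold check. The sketch below is purely illustrative: the spend cutoff and the ROI rule are invented assumptions, not FedEx's actual criteria, and the function name is hypothetical.

```python
# Illustrative sketch of the Step 1 routing decision (hypothetical
# threshold and ROI rule; not FedEx's actual criteria).

STRATEGIC_SPEND_THRESHOLD = 100_000  # assumed cutoff in dollars

def route_requisition(spend: float, expected_benefit: float) -> str:
    """Decide whether a requisition warrants full strategic sourcing
    or can go straight to a simple Ariba purchase order."""
    roi = (expected_benefit - spend) / spend  # return-on-investment criterion
    if spend >= STRATEGIC_SPEND_THRESHOLD and roi > 0:
        return "strategic sourcing"   # full supplier evaluation (Steps 2-7)
    return "simple purchase order"    # routed directly through the system

print(route_requisition(250_000, 400_000))  # strategic sourcing
print(route_requisition(5_000, 8_000))      # simple purchase order
```

The point of the sketch is only that small, routine spends bypass the heavyweight evaluation path, which is the efficiency the center-led model is credited with.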

If the findings in step 1 show that the spend is large, step 2 has the FedEx sourcing team select a sourcing strategy. This involves critically analyzing the available information and deciding how to approach the market. Various considerations are weighed at this point, including whether it is appropriate to request a proposal, whether maintaining existing relationships is viable, or whether to revisit negotiations and adopt a wholly new sourcing strategy.

If the strategy goes beyond negotiation, the third step involves the team carrying out in-depth research on suppliers within the area. The focus is on supplier qualifications, their capability to satisfy users, their service aspects, and other thresholds that determine whether they can meet the company's procurement needs. This process, known as supplier portfolio analysis, produces a list of suppliers to whom the request for proposal (RFP) is sent.

Step 4 involves the team revisiting the developed strategy to identify any emerging issues that may necessitate changes to the negotiation process. The team then develops the negotiation strategy that will guide the establishment of the contract with the identified supplier. At this point, a determination is made on whether to use conventional or reverse-auction RFPs. Once this process is complete and the RFPs have been received, Step 5 begins: the sourcing team conducts supplier selection and negotiation. Contractual agreements are then reached and signed so that procurement activities can begin.

Step 6 is the integration of the supplier into FedEx's E-procurement Ariba toolset to streamline procurement activities. The team identifies any integration conflicts and resolves them to make the contract work. The final step is to benchmark the supplier against its market by monitoring its processes with the FedEx supplier scorecard system.
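The benchmarking in the final step can be pictured as a weighted scorecard. The criteria, weights, and ratings below are hypothetical illustrations; the actual FedEx supplier scorecard is not described in the source.

```python
# Hypothetical weighted supplier scorecard (criteria and weights are
# illustrative assumptions, not the actual FedEx scorecard).

WEIGHTS = {"on_time_delivery": 0.40, "quality": 0.35, "cost": 0.25}

def scorecard(ratings):
    """Combine 0-100 ratings per criterion into one weighted score."""
    return sum(WEIGHTS[criterion] * ratings[criterion] for criterion in WEIGHTS)

supplier = {"on_time_delivery": 90, "quality": 80, "cost": 70}
print(scorecard(supplier))  # 0.40*90 + 0.35*80 + 0.25*70 = 81.5
```

Tracking such a score over time is one plausible way a sourcing team could decide whether a centrally contracted supplier continues to meet expectations.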

According to Keyes (2016), adhering to these steps and leveraging sourcing and purchasing processes across all the FedEx family companies has made it possible to obtain a collective contract that caters for the needs of each company. This has created cost and time effectiveness and efficiency while improving the service delivery of the individual companies and of the corporation at large. The Ariba toolset E-procurement system significantly facilitates these processes while ensuring that the best supplier is sourced to help the company work towards its business goals and objectives.

Evaluation of the FedEx Procurement Activities and System

The center-led supply chain management (SCM) procurement model at FedEx has proved an appropriate approach that has significantly improved the corporation's procurement activities (Poirier, 2016). The synergy of a sourcing team drawn from the different FedEx family companies ensures that procurement activities are expertly done. The company also benefits significantly from the cost reduction that comes with a centralized procurement process, which reduces the resources used. The model also makes it possible to source a single contractor who is easy to follow up with to ensure the company's expectations are met. Integration with the E-procurement system has enabled smooth implementation of the process while providing alternative approaches for simple products and services. Furthermore, the paperless environment has produced significant efficiencies, since business can be transacted at the click of a button. All of this has allowed for effective service delivery and, most importantly, the development of a sustainable procurement process.

However, some significant procurement risks face this system and need to be appropriately addressed if the company's procurement processes are to be made sustainable (Evangelista, 2017). While FedEx has achieved effective contract management through its center-led SCM procurement model, the different actors in a contractual agreement may in one way or another fail to fulfill their obligations. This is a significant risk given the centralized procurement strategy that FedEx Corporation employs. It is therefore vital for FedEx to have in place a strategic sourcing framework that will enable it to achieve procurement excellence and value growth.

Procurement Source Framework that can Enhance Services at FedEx

There are various strategies FedEx can put in place to enhance its procurement activities. First, it is crucial that FedEx make maximum use of its Ariba toolset E-procurement system. The Ariba toolset has excellent potential for integrating the various procurement activities into a single process, and that integration benefits both customers and company users through the efficiency it creates. Fully embracing technology in the procurement process will yield significant benefits to the business. Looking at the seven-step procurement model, the E-procurement system is insufficiently integrated even though it is more efficient. While conventional sourcing teams remain functional, their labor-intensive, manual processes make them comparatively inefficient. The need to hire thousands of employees to undertake the procurement process is an extra cost burden for the company and significantly reduces profit levels. The corporation should therefore consider fully integrating the steps into its technological system to ensure that all levels of operation work in order.

FedEx also needs to consider implementing an Enterprise Resource Planning (ERP) system. An ERP-based system will effectively streamline and align the procurement processes of the different FedEx companies under the corporation. The system will create efficiency by identifying the procurement needs of each company and categorizing them in a common database. Because the system can accomplish various computerized functions, details of the various companies can be retrieved, which facilitates contract establishment with the various suppliers. The ERP system will also allow easy tracking of the supply process, capturing what has been delivered in exact quantities and recording the payments made. The system saves both time and cost.
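The categorization into a common database described above can be sketched as a simple spend roll-up across operating companies. The company names come from the source, but the spend figures and category labels below are made up for illustration.

```python
# Minimal sketch of ERP-style spend categorization across operating
# companies (figures and category labels are illustrative assumptions).
from collections import defaultdict

requisitions = [
    {"company": "FedEx Express", "category": "fuel",     "spend": 120_000},
    {"company": "FedEx Ground",  "category": "vehicles", "spend": 80_000},
    {"company": "FedEx Express", "category": "vehicles", "spend": 50_000},
    {"company": "FedEx Freight", "category": "fuel",     "spend": 30_000},
]

def spend_by_category(items):
    """Aggregate requisitions into a common view keyed by category,
    the kind of roll-up that supports spend analysis and supplier contracts."""
    totals = defaultdict(int)
    for item in items:
        totals[item["category"]] += item["spend"]
    return dict(totals)

print(spend_by_category(requisitions))  # {'fuel': 150000, 'vehicles': 130000}
```

Grouping spend this way is what lets a centralized team negotiate one contract per category instead of one per operating company, which is the efficiency argument the paragraph makes.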

Adopting an E-commerce system is a crucial segment of the E-procurement system that will make business transactions between FedEx and its suppliers effective. Lindberg-Repo (2017) noted that the business-to-business (B2B) e-commerce model significantly facilitates business processes, allowing transparency and efficiency between the business actors. FedEx will thus be in a position to interact closely with its single or multiple suppliers in a more efficient way, and it is this efficiency that ensures the company achieves a sustainable procurement process in the long run.

Benefits of the Framework to the Company

The suggested technological procurement framework involving E-procurement, an Enterprise Resource Planning (ERP) system, and E-commerce tools will provide the company with significant capabilities. The Ariba toolset can integrate various categories of supplier information, enabling the selection of suppliers who meet the corporation's thresholds. The E-commerce technology will allow easy, fast payments and delivery of products. The ERP system will help meet the specific needs of the different companies and ensure that all needed resources are available in time. The computing environments these technologies provide allow the procurement process to be streamlined and automated, enabling the identification of purchasing patterns and spend analysis. The effective integration of the internal and external commerce processes of the various market players improves operational efficiency and delivers real cost savings.

Conclusion

The development of a sustainable procurement system is the first and most crucial element of company success. The sourcing and purchasing of company materials largely determine the profit levels arising from the sale of goods and services, which translates directly into organizational performance. FedEx needs to maximize its E-procurement system if it is to manage the existing risks appropriately and develop sustainability in its procurement processes.

References

Amann, M., Roehrich, J.K., Eßig, M. and Harland, C., 2014. Driving sustainable supply chain management in the public sector: The importance of public procurement in the European Union. Supply Chain Management: An International Journal, 19(3), pp.351-366.

Cagliano, A.C., Grimaldi, S. and Rafele, C., 2015. Choosing project risk management techniques: A theoretical framework. Journal of Risk Research, 18(2), pp.232-248.

Christopher, M., 2016. Logistics & supply chain management. Pearson UK.

Evangelista, P., 2017. Information and communication technologies: a key factor in freight transport and logistics. In Training in Logistics and the Freight Transport Industry (pp. 29-50). Routledge.

Freight, I.A., Industrials, L.S. and Year, F., 2018. FEDEX CORP.

Keränen, O., 2017. Roles for developing public–private partnerships in centralized public procurement. Industrial Marketing Management, 62, pp.199-210.

Keyes, J., 2016. Implementing the IT balanced scorecard: Aligning IT with corporate strategy. Auerbach Publications.

Lakew, P.A., 2014. Economies of traffic density and scale in the integrated air cargo industry: The cost structures of FedEx Express and UPS Airlines. Journal of Air Transport Management, 35, pp.29-38.

Lindberg-Repo, K., 2017. Processes: the way forward. Strategic International Marketing: An Advanced Perspective, p.185.

Liu, J., Meng, F. and Fellows, R., 2015. An exploratory study of understanding project risk management from the perspective of national culture. International Journal of Project Management, 33(3), pp.564-575.

Nowak, M. and Hough, L.S., 2018. The Package Express Industry: A Historical and Current Perspective. In Trucking in the Age of Information (pp. 77-100). Routledge.

Patrucco, A.S., Walker, H., Luzzini, D. and Ronchi, S., 2018. Which shape fits best? Designing the organizational form of local government procurement. Journal of Purchasing and Supply Management.

Poirier, C.C., 2016. Using models to improve the supply chain. CRC Press.

Ruparathna, R. and Hewage, K., 2015. Sustainable procurement in the Canadian construction industry: challenges and benefits. Canadian Journal of Civil Engineering, 42(6), pp.417-426.

Smith, A.A. and Offodile, O.F., 2014. Green corporate initiatives: a case study of goods and service design. International Journal of Logistics Systems and Management, 19(4), pp.417-443.

Wisner, J.D., Tan, K.C. and Leong, G.K., 2014. Principles of supply chain management: A balanced approach. Cengage Learning.

Project Risk and Procurement

Executive Summary

Every project carries associated risk. Adverse situations can occur because the project may not take the shape and direction intended. Project risk management is a crucial process that ensures the project team identifies all the possible risks associated with the project and puts the appropriate control strategies in place. The document management system (DMS) project is internationally designed and comes with some significant risks. For effective management of project risks, the project manager must have a risk management plan that allows continuous assessment of the project to determine the direction it is taking. Significant risks for the DMS project include ineffective customization of the system to fit the local needs of the business, failure to capture the critical concepts of the system, and undervaluation of document preparation. The project may also focus on financial benefits rather than efficiency outcomes, and it may result in a rigid system that is difficult to change when needed. For effective management of the identified risks, models and theories of risk and procurement management will be utilized: option theory and prospect theory provide critical insights into the best approach to managing the identified risks, and the Boehm model for risk management will be utilized to identify the approach to developing a risk management strategy.

Project Risk and Procurement

Every project carries associated risk. According to Hopkinson (2017), adverse situations can occur because the project may not take the shape and direction intended. Unintended outcomes thus become a reality, and the project has to manage these occurrences adequately. It is crucial that both the project team and the project manager identify all the potential risks associated with a project so that appropriate mitigation measures can be put in place to reduce their impact on the business. Project risk management ensures that the project team identifies all the possible risks associated with the project and that appropriate control strategies are put in place. Project risk and procurement management involves various crucial steps that need to be implemented in a cycle if effective risk management is to be achieved. It is critically important that all risks are identified in the initial stages of a project, with similar weight given to each; in this way, no risk will be overlooked, and all potential risks will be kept in check and appropriately managed. Timely management of risks, as well as assigning appropriate resources to manage them, ensures that the business maintains stability. Having a risk management team with the requisite knowledge and skills is also crucial to ensuring that all potential risks are appropriately handled.

Literature Review

When conducting a project, it is vital for the project manager to evaluate the underlying factors that shape its fate. Examining what previous scholars have discovered in the same field is a critical step towards successful implementation. This literature review will enhance the discussion by establishing the gaps left by other researchers, thus strengthening the outcomes of this study. Various articles will be reviewed to determine the credibility of the study by identifying the areas that need to be enhanced through further research, and various aspects will be evaluated to assess the viability of the research exercise in light of what previous studies have established.

The risk from the viewpoint of a project manager

According to Rimal and Turner (2015), risk can be associated with fear of the unknown regarding the outcomes anticipated when implementing a new project. The authors found that projects face numerous challenges that cannot be identified in advance due to their complexity. Such information helps in planning ahead to prevent potential risks from materializing within an organization when incorporating new technologies into its operations. Wynne (2016) supported the idea presented by Rimal and Turner (2015) and attributed the presence of risks within an organization to the misunderstandings that are common among different stakeholders. It is therefore essential for individuals to embrace effective communication when working on a project to keep potential risks from materializing.

These articles contain critical information that can help project managers make the arrangements essential to managing risks. With such knowledge, the researcher can develop credible results because the process of analyzing data is enhanced. However, the researchers did not address dealing with foreign firms, creating the need for further research to establish the situation when international firms are involved. This research topic focuses on project risk and procurement, examining various aspects to determine the overall impact of risk assessment.

Managing risk and the most appropriate stakeholders to manage it

Knuth, Kehl, Hulse, and Schmidt (2014) focused on the perceptions individuals hold about risks. The purpose of such a study is to understand the causes of risks and develop an idea of how they can be managed when planning to implement a new project. For successful operations, risks must be identified and managed through measures designed to offer critical solutions. Ho, Zheng, Yildiz, and Talluri (2015) likewise noted that risk management is the only approach that can guarantee the successful functioning of an organization. When implementing a project involving an international team, it is essential for the project manager to evaluate the underlying factors that might contribute to the project's challenges. This supports the accurate formation of the management strategy and helps avoid the project's downfall.

According to Bailey (2015), business continuity must be considered when setting up a new project. Doing so promotes effective performance, with the necessary strategies incorporated into operations to facilitate success. Relevant stakeholders must cooperate to ensure the required mechanisms are applied to prevent negative impacts on the project, which can be achieved through the assessment of different issues within the organization to determine the project's suitability and potential threats. Researchers need background information on the current situation before pursuing a specific study, to prevent repetition; conducting a literature review facilitates acquiring that background and collecting relevant resources for the study. These articles will form part of the useful sources of information that can help the researcher develop viable recommendations through critical analysis of the situation, as also recommended by Aven (2016).

Application of the Theory and Practice of Project Risk and Procurement Management towards developing insights and solving current problems

Theories enhance decision-making because specific events can be used to infer trends. Individuals engage in projects based on anticipated outcomes, which necessitates theories that can define the fate of an engagement. Prospect theory facilitates forming an idea of what can be achieved at the end of a project, and thus of determining the course of events so as to promote positive results. Lam (2013) noted that individuals are motivated by the promising outcomes achieved from a project, implying the need to streamline operations to prevent negative results. This creates the need to evaluate different theories and establish practical means a firm can adopt to enhance performance and attract more participants. Implementing the DMS is a critical step towards eliminating paperwork, and it therefore requires a collaborative approach to ensure the project is adopted without facing numerous challenges.

Lee, Markowitz, Howe, Ko, and Leiserowitz (2015) supported the need for prediction, because the practice contributes to the decision-making exercises that determine the fate of an organization. Predictions can be made from various elements, including past situations, the current position, and the vision of the institution. Problem identification facilitates the development of an appropriate strategy to handle the issue and enhance positive results. Rokonuzzaman and Paswan (2017) noted that investigations help unravel the critical problems that might hinder successful implementation of changes within a firm. It is through such evaluation of both the external and internal environments that the project manager predicts potential outcomes and makes the necessary arrangements to improve results by applying best practices during implementation. The authors presented a critical argument that can help the researcher develop insights into handling different cases when establishing project risks and procurement practices.

Yang, Hsu, Sarker, and Lee (2017) also supported the initiative of understanding the critical issues on the ground before launching a project. The authors focused on the need to integrate option theory into project risk management, considering the quantification of the fair value of risk mitigation measures through calculation. In this way, they explained, project managers would have an opportunity to select the most appropriate options for mitigating risks. The authors implied that managers have the responsibility of evaluating the underlying factors that influence the outcomes of an operation. A research exercise involves data collection that might pose a critical challenge to the researcher if prior knowledge is not gained through an evaluation of existing literature. The use of different articles enhances the results of a study because the researcher can incorporate various arguments when developing its conclusions from the information gathered in the field.
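The idea of quantifying the value of a mitigation measure can be illustrated with a basic expected-exposure calculation. The sketch below is not the method of Yang et al. (2017); it is a simpler, standard probability-times-impact view, and all the numbers are invented for illustration.

```python
# Toy expected-exposure view of whether a mitigation measure is worth
# its cost (all figures are hypothetical illustrations).

def expected_exposure(probability: float, impact: float) -> float:
    """Classic risk exposure: probability of the event times its impact."""
    return probability * impact

def mitigation_value(p_before: float, p_after: float,
                     impact: float, cost: float) -> float:
    """Net value of a mitigation: exposure reduction minus its cost.
    A positive value suggests the measure is worth taking."""
    reduction = (expected_exposure(p_before, impact)
                 - expected_exposure(p_after, impact))
    return reduction - cost

# Example risk: DMS customization fails to fit local needs.
# Mitigation cuts the probability from 0.4 to 0.1 at a cost of 60,000.
print(mitigation_value(p_before=0.4, p_after=0.1,
                       impact=500_000, cost=60_000))  # 90000.0
```

Comparing this net value across candidate measures gives a manager a defensible way to choose between mitigation options, which is the selection problem the cited authors address with more sophisticated option-pricing machinery.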

Reviewing different articles demonstrated a practice that is necessary for the promotion of effective operations. Project managers must work toward enhancing the outcomes of a given task by incorporating different strategies that can influence positive operations. As much as a great deal of information has been developed by previous scholars, it is vital to note that this research topic focuses on project risk and procurement where an international team is involved. The authors of the previous articles focused on internal factors and recorded little information about the external forces that contribute to the growth of a company. The current study seeks to address critical issues that affect the implementation of new projects based on different factors, necessitating further research to fill the gaps left by other researchers. The articles discuss general information without focusing on a specific area that can help streamline operations in different organizations, creating the need to venture into a field that will help shape the process of working on a project within an institution.

Document Management System Project

The business environment is fast changing. Increasing competition has made it necessary for businesses to have an electronic system that facilitates the effective management of business documents (Zhang & Fan, 2014). With highly diversified business activities, having a document management system (DMS) in place facilitates efficiency and effectiveness in the business process. For a business seeking to develop a competitive edge in its operational market, it is essential to store business documents centrally so that business information can be easily accessed and modified. In this case, the company is seeking to adopt a document management system that was developed internationally and is to be implemented by an international team within the next six months. It is important to understand that the suggested system is foreign and the implementers are equally foreign. This being the situation, the company is currently seeking to understand the basis for this particular system so as to capture its various critical aspects. The most important preparation is to have a project team in place that will take the lead in the orientation process of understanding how the system works. The team will work closely with the international team to implement the system in the business and to customize it so that it meets the specific needs of the business.

The Concept of Risk in the Project

Because the DMS project being implemented is international, it needs to fit appropriately into the local context of the business. As much as numerous potential benefits come along with the system, there are equally significant potential risks that need to be identified, as noted by Schwalbe (2015). The project manager in charge of the project is tasked with the responsibility of ensuring that the risk management process is appropriately undertaken. From a project manager's perspective, a risk is any occurrence that will hinder the successful implementation of a project and influence its outcomes such that it does not meet the project goals and objectives. For effective management of project risks, it is crucial that the project manager has in place a risk management plan that allows for continuous assessment of the project to determine the direction it is taking. In this case, the project manager takes the lead and seeks the support of the rest of the organizational team. The individual perceptions of the system among the organizational leadership and employees are crucial elements that determine how effectively the system will work for the organization. The adoption of this new technology is intended to enable the business to develop a competitive edge, so the business needs to maximize the benefits of the technology and minimize any adverse occurrences that it may bring about. It is important that various risk identification approaches, including brainstorming, checklists, interviews, fault tree analysis (FTA), and the structured what-if technique (SWIFT), are considered for identifying any potential risks (Kendrick, 2015).

Some potential risks are associated with the implementation of the DMS project, especially because it is an international system. To ensure that the project succeeds, it is important that these risks are appropriately handled at its early stages. The first significant risk of this project is ineffective customization of the system to fit the local needs of the business. While some significant adjustments may be made, the system may not effectively suit the needs of the business, and it may be challenging to implement it appropriately.

Secondly, the task team responsible for maintaining the system may fail to capture the critical concepts of the system, which means that the sustainability of the system will be significantly compromised. The organization's business processes will experience significant hitches that hinder the achievement of business goals and objectives. The third risk is the undervaluation of document preparation. Considering the conventional processes involved in document preparation, using the digital system may compromise the quality of the documents developed. This means that while there is great anticipation of accrued benefits, the opposite may occur, such that the digital documents may not be as effective as the manual documents initially used. Also, while traditional paper documentation is labor-intensive and costly, the implementation of a DMS is perceived to bring about significant cost cuts. In this way, much focus may be directed to financial benefits rather than efficiency outcomes. The goal of implementing this project is to realize business efficiencies, so a shift in focus toward financial savings indicates a change of focus; while financial saving is critical for business sustainability, it is a positive but unintended outcome of the project. Another risk is the implementation of a rigid system that is difficult to change when need be. Because the system is international, its customization process may be inadequate to meet the exact needs of the business.

Risk Analysis: How to Measure and Rank the Risks

The identified project risks are all significant in how they will impact the ability of the business to achieve its goals and objectives. It is, thus, crucial to analyze each of the identified risks, then measure and rank them (Acharya, Pedersen, Philippon, & Richardson, 2017). Risk analysis allows for the prioritization of what needs to be handled first to ensure that the project is not negatively impacted in any major way. In this case, both qualitative and quantitative approaches will be used to analyze the risks. The qualitative risk analysis will involve examining each of the identified risks, its probability of occurrence in the business, and the possible consequences that it may bring about (Yoe, 2016). In this way, all the organization's employees will be involved in the risk analysis process such that they will be in a position to identify any possible risks that they may encounter. Risk analysis may also be done quantitatively, whereby the organizational choices are identified. As much as these choices do not result in direct consequences, they play a significant role in influencing the direction that the project takes. The probabilities of these choices creating risks are established, along with the possible consequences of the choices and decisions made. The application of risk management models and theories mainly facilitates the understanding of the nature of risks as well as the costs attached to them.
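The probability-impact scoring described above can be sketched in a few lines. This is a minimal illustration, assuming simple integer P and I scales; the risk names and scores mirror the five DMS risks discussed in this paper and are illustrative only.

```python
def rank_risks(risks):
    """Sort risks by exposure score (probability x impact), highest first."""
    return sorted(risks, key=lambda r: r["P"] * r["I"], reverse=True)

# Illustrative before-control scores for the five DMS risks.
risks = [
    {"name": "Mismatch of the system with needs", "P": 2, "I": 2},
    {"name": "Ineffective document management system", "P": 3, "I": 2},
    {"name": "System rigidity", "P": 3, "I": 3},
    {"name": "Undervaluation of document preparation", "P": 3, "I": 4},
    {"name": "Lack of focus", "P": 4, "I": 3},
]

for r in rank_risks(risks):
    print(f'{r["name"]}: exposure = {r["P"] * r["I"]}')
```

Sorting by the P × I product surfaces the risks that should be mitigated first, which is the prioritization step the qualitative and quantitative analyses feed into.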

Option theory is an effective theory that aids in identifying the various options that can be used to mitigate potential risks. Once the risks have been prioritized through ranking, option theory allows for finding alternative options for managing them. Ranking risks allows for making the right decisions about how to minimize the adverse outcomes of the project and maximize the positive outcomes. It also allows for determining the most appropriate strategies for handling the risks in place. For the DMS project, a range of organizational stakeholders will be involved in the analysis so that a more comprehensive mitigation strategy for the identified risks is developed.

How to Construct a Project Risk Management Strategy

After the risks have been identified and analyzed, the next critical step is to put in place adequate risk management strategies. There are various risk management strategies that project managers can employ to manage the identified risks. For the DMS project, the most effective strategy is the mitigation strategy, which seeks to achieve two main things: first, to reduce the likelihood of risk occurrence, and secondly, to minimize the adverse impacts of a risk if it occurs (Kerzner & Kerzner, 2017). This option for the DMS project is guided by the Boehm model of risk management. Considering the nature of the identified risks for the DMS project, mitigation stands as the most viable risk management strategy, ensuring that appropriate contingency plans are established to counter the challenges. Being an international technology project, chances are that there will be significant risks that the project manager will have identified. It is also essential that the rest of the project team and the users of the system are involved in identifying the most appropriate risk management strategy. The project manager needs to be strategic while undertaking this process to ensure that all the critical issues identified are captured and addressed. If any gaps are left unresolved, there are possibilities that significant negative consequences will be realized.

The Value of Theories, Concepts, and Models in the Practice of Project Risk and Procurement Management

Evaluating the impact of risks on a project is a critical step toward realizing the anticipated results. Individuals must appreciate the existence of different theories and models that help in determining the issues and how to handle such problems. Early discovery facilitates the accurate formation of mechanisms that can manage the problems identified and enhances the potential outcomes. Theories help in developing ideas on the possible course of action an individual can take when working on a project, because numerous factors are evaluated to support the argument.

For the DMS project, the five significant risks that have been identified can be appropriately quantified and prioritized. The use of option and prospect theories will allow for quantification and prioritization of the risks based on their impacts. In addition, the Boehm model will provide an appropriate strategy for managing the risks in an iterative cycle. In this way, the necessary resources, finances, people, and time will be appropriately assigned to mitigate the risks (Harrison & Lock, 2017), the project will be appropriately shaped and aligned with its goals and objectives, and the DMS project will be successfully implemented and achieve its intended purpose.

Conclusion

Project management is a critical exercise that calls for proper preparation if individuals are to run successful tasks. It is the role of the project manager to conduct a thorough analysis of the potential risks with the aim of establishing the most appropriate strategy that can be employed to ensure new technology is adopted without any operational challenges. The use of theories is an essential step toward realizing the set objectives because they help in identifying the potential threats and in developing solutions to the problems discovered by the project manager. Models are designed to offer guidelines on what should be done to handle the issue. Reviewing previous articles helps the researcher streamline the research topic with the aim of filling the gaps realized after evaluating what other scholars wrote regarding the subject.

Risk Log Table

| Hazard/Risk | Cause | Before Controls (P / I / P×I) | Consequences | Response/Mitigation | After Controls (P / I / P×I) |
| --- | --- | --- | --- | --- | --- |
| Mismatch of the system with needs | Ineffective customization of the system to fit the local needs of the business | 2 / 2 / 4 | Specific goals of the project not met and the problem not solved | Identify the specific features of the traditional documentation and use appropriate people to implement the project | 1 / 2 / 2 |
| Ineffective document management system | Failure to capture the critical concepts of the system | 3 / 2 / 6 | Inefficiencies of the system resulting in unmet needs | Involve the appropriate processes and the right people to develop the system | 2 / 2 / 4 |
| System rigidity | Establishment of a rigid system that is difficult to change when need be | 3 / 3 / 9 | Failure of the project to meet its goals and unmet business needs | Implement a flexible system and build the capacities of people to manage the system | 2 / 2 / 4 |
| Undervaluation of document preparation | Not adhering to the appropriate process of document preparation | 3 / 4 / 12 | Failure to capture important business details; the decision-making process is impacted | Close monitoring to ensure that all the documentation processes are adhered to | 3 / 2 / 6 |
| Lack of focus | Focus on financial benefits rather than efficiency outcomes | 4 / 3 / 12 | Having in place a system that does not add any value to the business | Have clear goals for the system | 3 / 2 / 6 |

References

Acharya, V. V., Pedersen, L. H., Philippon, T. and Richardson, M., 2017. Measuring systemic risk. The Review of Financial Studies, 30(1), pp. 2-47.

Aven, T., 2016. Risk assessment and risk management: Review of recent advances on their foundation. European Journal of Operational Research, 253(1), pp. 1-13.

Bailey, D., 2015. Business continuity management into operational risk management: Assimilation is imminent... resistance is futile! Journal of Business Continuity & Emergency Planning, 8(4), pp. 290-294.

Harrison, F. and Lock, D., 2017. Advanced project management: A structured approach. Routledge.

Ho, W., Zheng, T., Yildiz, H. and Talluri, S., 2015. Supply chain risk management: A literature review. International Journal of Production Research, 53(16), pp. 5031-5069.

Hopkinson, M., 2017. The project risk maturity model: Measuring and improving risk management capability. Routledge.

Kendrick, T., 2015. Identifying and managing project risk: Essential tools for failure-proofing your project. AMACOM Div American Mgmt Assn.

Kerzner, H. and Kerzner, H. R., 2017. Project management: A systems approach to planning, scheduling, and controlling. John Wiley & Sons.

Knuth, D., Kehl, D., Hulse, L. and Schmidt, S., 2014. Risk perception, experience, and objective risk: A cross-national study with European emergency survivors. Risk Analysis, 34(7), pp. 1286-1298.

Lam, J., 2013. Operational risk management. In: Enterprise Risk Management: From Incentives to Controls, 2nd ed., pp. 237-270.

Lee, T. M., Markowitz, E. M., Howe, P. D., Ko, C. Y. and Leiserowitz, A. A., 2015. Predictors of public climate change awareness and risk perception around the world. Nature Climate Change, 5(11), p. 1014.

Rimal, R. N. and Turner, M. M., 2015. The role of anxiety, risk perception, and efficacy beliefs. In: Uncertainty, Information Management, and Disclosure Decisions: Theories and Applications, p. 145.

Rokonuzzaman, M. and Paswan, A., 2017. Effect of product return policy on consumer's risk perception, store image, and store patronage: A causal investigation. In: Creating Marketing Magic and Innovative Future Marketing Trends (pp. 779-779). Springer, Cham.

Schwalbe, K., 2015. Information technology project management. Cengage Learning.

Wynne, B., 2016. Misunderstood misunderstanding: Social identities and public uptake of science. Public Understanding of Science.

Yang, S. O., Hsu, C., Sarker, S. and Lee, A. S., 2017. Enabling effective operational risk management in a financial institution: An action research study. Journal of Management Information Systems, 34(3), pp. 727-753.

Yoe, C., 2016. Principles of risk analysis: Decision making under uncertainty. CRC Press.

Zhang, Y. and Fan, Z. P., 2014. An optimization method for selecting project risk response strategies. International Journal of Project Management, 32(3), pp. 412-422.

Lightweight Visual Data Analysis on Mobile Devices

Providing Self-Monitoring Feedback

 

Author’s Name:

 

Institution:

 

Month and Year of Submission:

 

 

Abstract

Mobile devices can be used in many ways beyond passing information from one person to another through calls or text messages. The devices are built to capture and interpret data of various forms, ranging from letters and numbers to videos and images. With the help of configured applications, the devices can scan and read codes on various items in supermarkets. However, in the present world, many people suffer from diseases caused by poor eating habits, such as the lack of balanced diets containing the right nutrients. This research will examine a person's calorie intake at every meal; individuals may take in more or fewer calories depending on the meals they are served. To make the best use of mobile devices, a prototype app will be built to capture selected information, after which a picture of the specified meal is taken. The results are calculated based on an inspection of the picture in conjunction with the offered information. A majority of the world's population possesses Android mobile phones that can be used to collect such information and offer feedback to the patient or user without visiting a specialist for advice. (Word count as specified in the Course Handbook: 14112 words.)

 

Acknowledgment

This dissertation marks a great milestone in my academic life. The knowledge and information I have gathered and learned have been shaped by my participation in the research for this project. I offer gratitude to everyone who played a part in ensuring that I gained the required knowledge.

My thanks go to my advisor, who guided me through the course of my research. The advisor was available whenever I needed him during the preparation of the research proposal and during the research's conceptualization. Were it not for the efforts of my advisor, I would not have been able to accomplish what I have achieved. The instructions, guidance, and recommendations offered enabled me to gather the right information in the preparation of my dissertation.

My gratitude also goes to my instructors, professors, and tutors who took part in my journey toward the completion of my course. They all believed in my ability to make it through my studies, and I now thank them all for the motivational guidance and encouragement. Finally, I thank my family, guardians, and sponsors for facilitating my education to this level. They contributed as much as they could in terms of motivation and finances to see me through the system and achieve the best grade. I am so grateful; thank you all, and be blessed.

 

Contents

Abstract

Acknowledgment

List of Figures

List of Tables

Chapter One

1.0 Introduction

1.1 Background Information

1.2 The Importance of Lightweight Visual Data Analysis on Mobile Devices

1.3 The Theoretical basis for Visual Data Analysis

1.4 Visual Data Analysis Statement

1.5 Lightweight Visual Data Analysis on Mobile Devices Study Objectives

1.6 The Relevance and Effectiveness of Lightweight Visual Data Analysis Study on Mobile Devices

1.7 Hypothesis and Research Questions

1.7.1 Hypothesis

1.7.2 Research Questions

Chapter Two

2.0 Literature Review

2.1 Background

2.3 Theories on Visual Data Analysis

2.3.1 Data Analysis Exploration

2.3.2 Self-Monitoring Visual Data Analytics

2.3.3 Monitoring Health using Analyzed Visual Data

2.4 Data Exploration and Analysis Literature

2.5 Visual Data Extraction for Analysis

2.6 Visual Data Scalability Analysis Review

Chapter Three

3.0 Methodology

3.1 Visual Data Analysis Tools

3.1.1 Tableau

3.1.2 Data-Driven Documents (D3.js)

3.1.3 WebDataRocks

3.1.4 BIRT

3.1.5 Google Charts

3.1.6 Cytoscape.js

3.2 Mobile Device Visualization Approaches

3.2.1 Approaches of Compact Visualizations

3.3 Cutting-edge Visualizations in lieu of Mobile Interaction

3.4 Dietary Intake and Its Contribution to Weight Loss through Self-monitoring

3.5 The range of Self-monitoring mobile device application

3.6 An Investigation of the Effect of Mobile Devices on Self-monitoring

3.7 Self-monitoring and Visual Data Analysis of Dietary Intake

3.8 Self-monitoring and Visual Data Analysis of Physical Activity

3.9 Self-monitoring and Visual Data Analysis of Weight

Chapter Four

4.0 Data Analysis and Discussion

4.1 Quantitative Visual Data Analysis Using Regression Models

4.2 Surveys on Human Effect Caused by Handheld Mobile Devices

4.3 Discussion

Chapter Five

5.0 Conclusions and Further Research

5.1 Conclusion

5.2 Future Research

References

Bibliography

Appendices

 

 

List of Figures

Figure 3.1 Bubble Chart

Figure 3.2 Scatterplot Chart

Figure 4.1 Graphical Representation of Results

Figure 4.2 Calories Counter

Figure 4.3 Quantity of hours spent on handheld devices (HHDs)


List of Tables

 

Table 4.1 D3.js vs. Tableau

Table 4.2 Sample Population Dataset

Table 4.3 Descriptive Statistics

Table 4.4 Summarized Demographic Data

Table 4.5 Purpose of mobile device

Table 4.6 Discomforts encountered

Table 4.7 Tangling sensation

 

Chapter One

1.0 Introduction

1.1 Background Information

Health is a major concern, and everyone in society must take part in improving it. Starting with the nutrients in meals, the location where meals take place, and eating habits, these factors must be analyzed using a developed app or database. People need to be informed about the nutrients and other components found in food. People have the ability to collect visual data using mobile devices, but little has been done regarding visual data analysis in the form of generating feedback to users. The concept of getting recommendations and analysis reports from captured data in the form of images can help the larger population save time and other resources. Detecting the nutrients in items such as a set dining table or the plates served with meals has always been a challenge. Mobile phones and other wireless machines with sensors have the ability to scan and read hidden information through codes and other specified credentials as chosen by the users. Mobile phones likewise have the ability to make use of pictures and data to help calculate the level of calories consumed by a person in the meals served. A meal with various ingredients will be expected to have certain levels of calories based on the standard recipes for preparing such meals. For example, a person who has been served a breakfast containing an egg, milk, and bread can be informed of the number of calories consumed. A lightweight data capture and visualization project will be done to gather information through text and pictures to analyze the number of calories consumed. Despite pictures having various challenges, such as darkness, too much light, or blur, the information collected is analyzed hand in hand with the selected items purported to be in the pictured meal. The design of the app has no specific details it must contain, only the ability to analyze data using some of the keyed-in information related to the pictures taken.
Certain chronic diseases in the present age are caused by poor dieting and a lack of exercise to burn the extra calories taken. Instead of keeping records of the served meals to inform nutritionists, individuals can track the meals in real time, even before eating, to ensure the right quantities of calories have been served, thereby avoiding weight gain and obesity. Excess weight is a major cause of diseases affecting the world, such as high blood pressure and diabetes, and to some extent a contributor to severe cancers. Every Android phone user has the ability to read and respond to text information displayed on the phone's screen. Making the app simple, so that the user selects a few of the items found on the served meal and then takes a picture, will help give an idea of the number of calories consumed.
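The breakfast example above can be sketched as a simple calorie tally. This is a hypothetical illustration of the kind of lookup the prototype app might perform once the user has tagged the items visible in the meal photo; the per-item calorie values and function name are rough assumptions, not nutritional reference data.

```python
# Approximate, illustrative calorie values per tagged item (assumptions).
CALORIES_PER_ITEM = {
    "egg": 78,    # one boiled egg, approximate
    "milk": 103,  # one cup of milk, approximate
    "bread": 80,  # one slice of bread, approximate
}

def estimate_meal_calories(selected_items):
    """Sum the approximate calories for the items tagged in the photo."""
    return sum(CALORIES_PER_ITEM.get(item, 0) for item in selected_items)

# The breakfast example from the text: an egg, milk, and bread.
print(estimate_meal_calories(["egg", "milk", "bread"]))
```

In the actual app, the image analysis would confirm or refine the tagged items before the tally is computed; the lookup itself would draw on a standard recipe and nutrition database rather than the toy table shown here.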

1.2 The Importance of Lightweight Visual Data Analysis on Mobile Devices

There is a huge variety of foods eaten in the world, and they cannot all be analyzed from one central place. Assessing their nutrient components has been a major challenge. With a well-trained artificial intelligence program, certain nutrients can be read and analyzed from visuals taken in the form of images or pictures. Because mobile devices are used by a great share of the world's population, analysis using pictures can be carried out with devices already available. Almost every person at a set dinner or served meal owns a mobile device capable of collecting visual data. Instant responses and feedback offer efficient and reliable information that can be trusted more than results from samples dropped off at a laboratory. Results generated with immediate effect can lead to trustworthiness and reliability of the self-monitored feedback. The study will be of significance to all mobile device users because they will make good use of their gadgets in assessing the quality and value of the meals consumed. Instead of taking pictures of junk and fast foods to post on social media, educative information can be generated from the pictures to guide users toward the right and required dietary intake. Upon understanding the number of calories contained in the meals a person takes every day, setting targets on what to eat in order to regulate or balance calorie intake will lead to a healthier world.

The world is driven by technology, and mobile devices increasingly include the ability to capture information and process it according to set commands. For example, data collection with applications that can recognize visuals makes learning and research easier and more user-friendly. Getting trends from the set format and acceptable inputs creates the desire to test the available information.

Social sites have enabled people to post and tell the world about the meals they take, but the ability to calculate and inform about nutrient content has been a challenge. For example, Instagram users have the habit of posting pictures of events attended, and pictures of the meals served are taken and posted on social media. An application that can read, interpret, and analyze the information in such pictures can help such users understand the pictures and get recommendations on what to do to make things right (Varona-Marin, Scott, & University of Waterloo, 2016).

The relationships between certain components in images have subtle correlation trends that must be understood by users of such information. Beyond decorations and fancy presentations of visuals, performing visual data analysis helps in understanding, at a deeper level, how human eyes view objects. For example, if an application has the ability to discover an ingredient used in a meal and calculate its calorie content, such a system could be applied by health ministries in ensuring that the set standards of serving food are met in the hospitality industry.

People do not want to waste time researching certain topics, such as maintaining healthy eating, because the required effort and resources cannot be met by everyone. Since technology has enabled the embedding of health and fitness applications on mobile devices, people can feel at ease using applications that require only a few minutes of their time, even before engaging in an activity, to take pictures and upload them into the application.

The information or feedback received from visual data analysis on mobile devices can be shared with other users, who can take screenshots or download the results. The simple exercise of passing the obtained results to another person creates a platform for learning and gaining more insights about topics affecting a majority of the population. Having too many calories in the body can facilitate weight gain and in some cases can lead to obesity (Varona-Marin, Scott, & University of Waterloo, 2016).

1.3 The Theoretical basis for Visual Data Analysis

Visual data analysis tools come with interfaces used to interact with users. The interface offers a screen to input data and take pictures that can be processed and analyzed for the intended results. The screen offers choices to make while inserting the required information into an application. Depending on the complexity of the system developed, the output may range from charts, line graphs, and bar graphs to feedback recommendations. The solutions offered by a visual data analysis application may include, but are not limited to, the following:

  1. Little or no coding during the preparation and implementation of the analysis
  2. The complexity of the analysis must be reduced, making it simple to locate data from various sources
  3. The graphics used must be easily customized and attractive to users to encourage regular use of such tools in data analysis
  4. All levels of underlying data must be easy to drill down into, enabling detailed results
  5. Multiple views must be combined where possible to create an at-a-glance general understanding of the entire process used in extracting, processing, analyzing, and interpreting results.

Human beings hold huge volumes of data that lack meaning until devices and applications or systems are made to solve the problem. For example, mobile devices can have apps installed that perform much of the data collection and analysis based on the commands and instructions set while creating the databases. An interactive dashboard where the collection and analysis of information can take place must be created. The information at hand can never have meaning without processing it and generating results that can be understood by everyone in the simplest terms. Meals taken on a daily basis are easy to consume but difficult to judge as good or bad after being taken. Visualization is a helpful tool because it helps users take the visualized items in their lives into account while gaining benefits. For example, collecting calorie information from served meals before eating them enables people to learn the number of calories in every meal, so that measuring nutrient intake becomes a habit. The results obtained are received, and common sense is applied to determine the remedy or corrective measure for the issued feedback. For example, when a system response informs the user that a consumed ingredient is in a smaller quantity than recommended, the suggested measure is to ensure more is consumed to meet the required standard.

1.4 Visual Data Analysis Statement

Health organizations around the world have recognized the use of mobile devices in determining people's health status. Since a majority of the population owns mobile devices, the collection of information from meals can be easy because the required details are limited to a few selections and a snapshot. The images and information provided can offer insights into the nutrients contained in every meal serving and support recommendations for a reduction or increase in the consumption of various portions. The average recommended intake is about 2000 calories a day for a woman and 2500 calories for a man. The prototype application is expected to generate results for the number of calories taken in a day. When the results are below the recommended average consumption, the application advises an increase, while when the results are above the recommended average, a reduction in consumption is advised as the feedback.
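The feedback rule just described can be sketched as a simple threshold check, using the stated averages of 2000 kcal/day for a woman and 2500 kcal/day for a man. The tolerance band and function name are assumptions added for illustration; the actual app's thresholds may differ.

```python
# Stated daily averages from the text; the tolerance band is an assumption.
RECOMMENDED_DAILY_CALORIES = {"woman": 2000, "man": 2500}

def daily_feedback(sex, calories_consumed, tolerance=100):
    """Advise the user to increase, reduce, or maintain calorie intake."""
    target = RECOMMENDED_DAILY_CALORIES[sex]
    if calories_consumed < target - tolerance:
        return "increase intake"
    if calories_consumed > target + tolerance:
        return "reduce intake"
    return "intake is within the recommended range"

print(daily_feedback("woman", 1500))  # well below 2000: advises an increase
print(daily_feedback("man", 3000))    # well above 2500: advises a reduction
```

A tolerance band keeps the app from flagging trivially small deviations from the average, which would otherwise make the feedback noisy for everyday use.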

1.5 Lightweight Visual Data Analysis on Mobile Devices Study Objectives

The world's population is increasing rapidly, and healthcare providers are racing to research emerging diseases. Technology has become a basic tool: anyone can use a programmed system by submitting the requested information, and the interpretation of results has been made easy to understand. Making physical visits to professionals to determine the nutrients and calories in one's daily meals is impractical, whereas using a handheld device one already owns for self-analysis saves time and makes the process faster. The study will examine the impact of implementing personal visual data analysis on mobile devices. Its objective is to evaluate how people make use of mobile devices with visualized data. The images and other visuals captured can be processed to produce detailed results. The target population is all users of mobile devices; a large percentage of people worldwide own Android phones or other smartphones capable of collecting visual data. Within the available memory, such devices can host applications ready to analyze captured photographs of meals and calculate their calorie content. The visuals can be analyzed on the basis of the shape, color, and pixels of the pictures taken.
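As a minimal illustration of analyzing pictures by color and pixels, the sketch below reduces a list of RGB pixels to a coarse color histogram, the kind of lightweight feature an on-device analysis might start from. The function name and bin count are hypothetical, not part of any specific library or of the prototype itself.

```python
# Illustrative sketch (not a production classifier): reduce an image,
# represented here as a list of (R, G, B) pixel tuples, to a coarse
# color histogram. Coarse color distributions are one of the lightweight
# cues (shape, color, pixels) mentioned in the text.
from collections import Counter

def coarse_histogram(pixels, bins=4):
    """Quantize each channel into `bins` levels and count occurrences."""
    step = 256 // bins
    return Counter(
        (r // step, g // step, b // step) for r, g, b in pixels
    )

# Example: a tiny 2x2 "image" with two reddish pixels
pixels = [(250, 10, 10), (245, 5, 12), (10, 240, 10), (12, 8, 250)]
hist = coarse_histogram(pixels)
# Both reddish pixels fall into the same coarse bin (3, 0, 0)
```

A deployed app would obtain the pixel data from the camera API and combine such color features with shape cues; the point here is only that the per-pixel computation is cheap enough for a handheld device.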

1.6 The Relevance and Effectiveness of Lightweight Visual Data Analysis Study on Mobile Devices

The effectiveness of visualizations on lightweight mobile devices has not been fully explored, but with the current research some analysis can be done to establish and identify the contents of a meal. Because the visualizations alone are not sufficient, a small amount of input data is required to support the comparison and analysis of the items within a meal. Applying data-analysis concepts, collection and processing will be the major tasks: the lightweight visual data analysis app's primary objective is to collect information in the form of pictures, assisted by selected text predictions, and to generate the number of calories in the food. The data-processing logic is integrated with information about the number of calories contained in different kinds of food, and the analytical process builds on input records or historical data. Many people purchase phones for communication and for conveniences such as taking pictures and listening to music. Building simple, low-effort applications that benefit users' health and other real-life concerns can therefore spread knowledge and enlighten people. An application that requires little memory and generates results through a programmed workflow makes this easy. Developing such an application takes considerable time and effort, but once it is accepted by the target organizations, institutions, or the wider market, it will generate the revenue needed to maintain and update it.

1.7 Hypothesis and Research Questions

1.7.1 Hypothesis

Life challenges can be addressed with mathematical techniques that derive comparisons between two datasets. In the approach of using visualizations to determine the calories people take in from served meals, these techniques can be applied to the variables involved, such as the number of people and the type of food. In visualizing human behavior, the aspects that directly affect people strongly influence the steps they take in life. The sample population size and the type of food served are the variables needed to determine the expected outcome of the visualized data. The tests available for visual data analysis on mobile devices include relating variables, determining the intake of particular nutrients, and examining other aspects of life such as the kind of exercise used to keep fit. The American Heritage Dictionary defines a hypothesis as "a tentative explanation for an observation, phenomenon, or scientific problem that can be tested by further investigation." In an ever-changing world with new technological approaches, health has become a major challenge because of the spread of fast food sold on the streets and in institutions, and teenagers are at particular risk of adopting behaviors that lead to living on junk food. The hypothesis in our case concerns determining the number of calories in the meals served at different times of the day, compared with the recommended daily average consumption. The objective of the study is to determine the number of calories people consume in their meals. High or low calorie intake has been found to result from a lack of knowledge about the number of calories contained in each meal; creating a system or application that runs on mobile devices will enable people to discover that number and learn better ways of managing their calorie intake.
Beyond counting daily calories, burning excess calories is also a challenge, because no incorporated tool is available that checks both the number of calories taken in and the number burnt.

1.7.2 Research Questions

The research on lightweight visual data analysis on mobile devices will evaluate the theoretical literature on lightweight visual data analysis tools for mobile devices. It aims to answer the following questions:

  • Does the use of mobile devices aid in visual data collection?
  • Do handheld devices have the ability to process visual data and generate easy-to-understand results?
  • Will the use of lightweight visual data analysis tools on mobile devices improve the understanding of healthy eating habits and create a routine of checking the contents of meals before choosing what to eat?
  • Will the generated results offer satisfactory feedback to mobile devices users?


Chapter Two

2.0 Literature Review

2.1 Background

The term "visualize" has two distinct meanings. The first is the mental formation of object images, an internal, cognitive act; the second occurs when the eyes make things visible, an external, perceptual role. These two definitions can be taken to describe the relationship between cognition and perception, and it can be argued that the visualization of images keeps changing with time. Graphical demonstrations of data and concepts are now used to generate results from observed images. Similarly, the objectives of visualization have changed as much as the meaning of the term. The goals of visualization are limited to the following three:

  1. Offering exploratory analysis: a typically undirected search for new information in trends and structures, without relying on any initial hypothesis
  2. Performing confirmatory analysis: goal-oriented examination of an existing hypothesis with the intention of confirming or rejecting it
  3. Making presentations: the effective and efficient communication of facts and ideas fixed a priori

Early uses of visuals, such as maps, date back to historic events; the major goal of such maps was to carry a presentation of the past into the present. As accurate as such presentations can be, the clarity of images wears out with time, and the surviving artifacts fail to deliver the original message. Technology and computers brought about the rise of exploratory data analysis through graphical user interfaces, and statistical research was advanced and simplified by statistical tools developed for and run on computers. Research in information visualization in its own right began in the past two decades, when the discipline was developed by individuals intent on using images to generate information. The search for interesting, structured data across various scenarios motivated the move to visual data analysis, and the interplay of many structures with "what-if" thinking about how objects might be seen differently has contributed much to its development. Research continues on several interactions in this area, such as linking several visualizations, modifying run-time parameters, and filtering data to reach a standard way of incorporating such information in a customized manner.

2.3 Theories on Visual Data Analysis

2.3.1 Data Analysis Exploration

Visual data analytics is a science of reasoning facilitated by interaction with visuals. Information gathering, processing, presentation, and decision-making are all involved in visual data analysis. The ultimate objective is to ensure that the visible images produce the intended results. For example, human health is strongly determined by diet. When images of dishes are taken and sent to a server for analysis and processing, the nutrient contents must be estimated and calculated. Although not perfect, the results obtained can give a sense of direction for eating habits and meal serving. Various sources list the number of calories contained in different foods. The essence of visual analysis here is to allow the application to sense and process pictures of meals, with a little information about some of the ingredients provided before the pictures are taken.
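The per-food calorie lookup described above can be sketched as follows. The table values and function names are illustrative assumptions, not authoritative nutrition data or the study's actual implementation; the sketch only shows the estimation step once the items in a photographed meal have been recognized or confirmed by the user.

```python
# Sketch of the server-side estimation step: once items in a meal have
# been identified (or confirmed by the user's text selections), calorie
# content is estimated from a per-item table. The values below are
# rough illustrative figures, not authoritative nutrition data.

CALORIES_PER_SERVING = {  # kcal per serving, approximate
    "rice": 200,
    "chicken breast": 165,
    "salad": 50,
}

def estimate_meal(items):
    """Sum known calorie values; unrecognized items contribute zero."""
    return sum(CALORIES_PER_SERVING.get(item, 0) for item in items)

estimate_meal(["rice", "chicken breast", "salad"])  # 415
```

Skipping unrecognized items, rather than failing, matches the text's point that the results need not be perfect to give a useful sense of direction.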

The field of handling visual data faces challenges such as blurry and unclear images. Pictures can contain images similar to those of other items, leading to wrong feedback from the server. The process of extracting information from pictures passes through critical stages, and various complex attributes must be integrated. The integration must be carefully programmed to prevent random responses; for instance, some images may contain many components that do not come out clearly when analyzed (Singh, Dey, Ashour, & Santhi, 2017). The three main objectives of performing visual data analysis are: 1) presentation, 2) confirmatory analysis, and 3) exploratory analysis. When conducting a visualization for presentation, the facts to be presented are assigned a priori, and the techniques for appropriate presentation are chosen according to user preference. Honoring the user's preference provides better ground for effective and efficient communication of the analysis results.

Mobile devices accompany a heterogeneous life that is very complex to understand. Huge amounts of data are collected, with little potential for well-explained or well-analyzed results using current tools and apps. People like taking selfies and other photos while having fun, and on such occasions different meals are served at different intervals, which can lead to over-consumption of calories. Every mobile phone user uses the device differently depending on age and lifestyle. While collecting data and valuable information for analysis and evaluation, the most interesting activities and patterns of life are captured to reflect a person's real character. People at risk of becoming overweight or obese must be cautious about the calorie content of their meals (Tomar, 2017). For example, teenagers have a higher chance of becoming obese than the middle-aged, married population who spend much of their time at home, where the meals served are balanced and hygienic. The major aim here is to determine the number of calories people consume, rather than the quality or health benefits derived. Based on the average recommended daily calorie intake, the lightweight visual data analysis mobile application must be able to extract useful information about quantity. While processing the information, short-duration events must be taken into consideration, since they occur frequently. Visual data analysis in mobile applications can highlight outliers, show trends, indicate clusters, and expose any gaps within the process.

2.3.2 Self-Monitoring Visual Data Analytics

Self-monitoring deals mostly with data collection and feedback; visual data analysis, on the other hand, deals mostly with the feedback aspect. The main difference is that self-monitoring output may not always be represented visually, for example when it takes the form of text, whereas visual data analysis offers an interactive visual representation of all the information collected during the self-monitoring process (Chittaro, 2006). The design and development of vibrant yet effective self-monitoring tools is a fast-growing industry in which human-computer interaction (HCI) experts have integrated such technologies into mobile and handheld computing devices. With the push for mobile health applications, demand for self-monitoring tools is now at an all-time high. Such tools aim to support people's health and wellness while providing visual feedback through which users can reflect on their behavioral choices. In overall self-monitoring application design, the focus is on reducing the data-capture burden on the user by utilizing the automated sensors already present in most mobile devices to collect user information (Chittaro, 2006). In a sense, a lightweight visual data analysis tool integrated into a mobile device collects more accurate statistics and provides the user with real-time feedback. In addition, the visual analysis and feedback of self-monitoring have motivated people to capture more information and encouraged self-reflection. A major reason a lightweight visual data analysis tool on a mobile device can deliver accurate results is its use of little memory and small amounts of data.

Perhaps the most challenging aspect of designing a self-monitoring tool is the continued tracking of a user's statistics through automation (Chittaro, 2006). With complete automation, however, the purpose of self-monitoring is lost: self-monitoring is only effective if users are aware of their behaviors and identify how to change them, and automating the process isolates users from their data. In this regard, visual data analysis can be used to enhance users' awareness of their own behaviors and activities, provide useful feedback, and motivate them to continue self-monitoring (Choe, Lee, Munson, Pratt, & Klentz, 2013). Overall, more self-monitoring activity and continuous tracking increase the chance of creating an appropriate feedback loop. Visual data analysis on mobile self-monitoring devices is thus a vital element that offers users valuable tools for continuing to track their behaviors.

Self-monitoring and visual data analysis tools provide feedback that supports a user's goals. Studies have further revealed that visual feedback is even more effective when a person's current state is included (Choe, Lee, Munson, Pratt, & Klentz, 2013). Another investigation showed that feedback presented in a certain way can raise people's confidence levels, enabling them to meet their goals. However, varying configurations of feedback elicit different results from different users. When planning how to display self-monitoring feedback, designers therefore need to consider several different designs and how each provokes a different reaction from various users. This process helps designers iterate between prototypes to select the best fit, one that effectively supports users in meeting their weight-loss goals and objectives (Kazdin, 1974).

2.3.3 Monitoring Health using Analyzed Visual Data

Obesity has become an increasing problem worldwide, with areas such as the United States among the worst hit. According to the Centers for Disease Control and Prevention (CDC), at least one-third of the United States adult populace is obese, approximately 35% of that population (U.S. Department of Health & Human Services, 2016). Obesity by itself is not an infection or disease but a gateway to problems such as stroke and heart disease. A person is considered obese if their weight exceeds the maximum healthy weight for someone of their height, as measured by the body mass index, or BMI: a person's weight in kilograms divided by the square of their height in meters. The higher the BMI, the higher the chance of being obese (U.S. Department of Health & Human Services, 2016). A walk to a local chemist or pharmacy will reveal a wide array of drugs that offer weight-loss solutions, yet only a handful of such products provide people with the knowledge and tools needed for a complete lifestyle change (United States Preventative Services Taskforce, 2012).
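The BMI formula stated above is simple enough to compute directly; a minimal sketch follows. The category thresholds are the conventional WHO-style cut-offs, added here for illustration rather than taken from the sources cited.

```python
# BMI as defined above: weight in kilograms divided by the square of
# height in meters. The category thresholds are the conventional
# WHO-style cut-offs (underweight < 18.5, normal < 25, overweight < 30).

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def category(b: float) -> str:
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"
    if b < 30:
        return "overweight"
    return "obese"

round(bmi(95, 1.75), 1)  # 31.0, which falls in the "obese" category
```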

According to the USPSTF, an effective weight-loss program ought to be comprehensive in order to be considered a behavioral intervention technology (United States Preventative Services Taskforce, 2012). Weight-loss programs have typically been characterized as behavioral interventions that dictate reduced calorie intake and more activities that use up the body's stored energy. Such programs involve setting goals and applying self-monitoring strategies. Through self-monitoring you can effectively track your calorie intake as well as the physical activities that help you use up stored energy, becoming conscious of your own behaviors. Self-monitoring can be defined as an individual strategy of recording, analyzing, and providing useful feedback, with the main objective of increasing or decreasing certain aspects of a person's everyday life (Foster, Makris, & Bailer, 2005). In most cases, a person engaged in self-monitoring aims to improve his or her individual functionality, academic capacity, or behavior. Rather than focusing on the negative, self-monitoring strategies are designed to develop the individual skills that lead to the desired outcome.

According to Rohrer, Cassidy, Dressel, and Cramer (2008), obesity is a worldwide problem that should be treated with utmost care, providing patients with the tools and information necessary to monitor their behavioral choices and identify where to make changes that lead to effective weight loss. According to Pender, people are more likely to participate if they believe the activity is beneficial to them (Sakaraida, 2010). The use of self-monitoring technologies to track body weight, physical activity, and dietary consumption has therefore become common practice among health organizations and individuals. The increased awareness of these technologies is a direct result of the need for weight-loss programs that are simple and easy to follow through. Understanding how best to translate the information gathered by these self-monitoring technologies, however, remains a challenge at the very least. As part of behavioral weight-loss technological interventions, monitoring one's dietary intake is therefore a key aspect of success. Behavioral intervention technologies, or BITs, are applications running on common everyday devices such as tablets, mobile phones, sensors, and other mobile devices to support health improvement and wellness activities. From a theoretical standpoint, self-monitoring is the predecessor of self-evaluation, which eventually leads to reinforcement strategies for the changes achieved (Foster, Makris, & Bailer, 2005).

Self-monitoring is essential when trying to establish new behaviors in pursuit of weight-loss goals; it involves paying attention to a specific aspect of a person's activities and taking note of the main details of his or her behavior. For effective implementation of self-monitoring strategies, activities must be recorded along with the conditions under which they take place and both their long-term and short-term effects on the subject (Bandura, 1998). A successful self-monitoring endeavor depends partly on the subject's consistency and truthfulness with respect to the targeted behavior, such as calorie intake. In fact, between 1985 and 1990, self-monitoring referred only to keeping paper diaries with diets written in them; during those years, scientists and dieticians discovered the direct correlation between weight loss, calorie intake, and physical activity (Jakicic, 2002). Today, in addition to physical activity and following a diet, self-weighing has been introduced as another self-monitoring component (Linde, Jeffery, French, Pronk, & Boyle, 2005).

Although self-monitoring has been highly regarded as the keystone of behavioral weight loss through behavioral intervention technologies, there is another side to it that is often overlooked: self-monitoring is only as valuable as the feedback delivered to its users. Current research has focused on food databases, intervention methods, and new techniques for automatic self-monitoring data collection and analysis. In addition to collecting and analyzing a person's behavioral data, these technologies need to present the information to the user in a simplified, easy-to-understand format, yet very little effort has gone into researching visualized user feedback and its effectiveness. The norm is that most commercial applications offer simple two-dimensional feedback built around one common message: that more physical activity and less calorie intake is best for weight loss. To some extent this holds true, but after vigorous and intensive physical activity the human body needs enough time to recuperate and regain the energy used up. When it comes to calorie intake the situation is even more complicated: the amount of calories alone does not tell us what constitutes a balanced diet.

Therefore, there is a need for a more up-to-date feedback mechanism that allows lightweight visual analysis of collected self-monitoring information. Information such as the nutrients in meals, self-monitored activities, and eating motivators is considerably easier to analyze with visualizations on a mobile device (Fox & Duggan, 2012). It is easier for self-monitoring users to understand their behaviors from the data collected and learn how to change them for the better, toward good health. Mobile devices, primarily smartphones, have become commonplace and are widely accessible worldwide. Smartphones have revolutionized the communication industry so much that their purpose is no longer limited to communication: today smartphones and other mobile devices host applications for many uses, one of which relates to a person's health. Such applications can be used for self-monitoring, collecting user data and providing real-time feedback. Consequently, through advances in technology, self-monitoring applications have developed greatly, with the overall effectiveness of real-time mobile devices being tested empirically (Fox & Duggan, 2012). Such a tool can take the form of any number of handheld devices, such as smartphones, tablets, and body monitors, that have overcome the implementation barrier for mobile self-monitoring. The basic operation of these mobile self-monitoring devices is described in three stages:

  • What people do (behavior patterns)
  • Why people act the way they do (psychosocial behavior patterns)
  • When do people act (triggers and behavior timing)

2.4 Data Exploration and Analysis Literature

Information collected from self-monitoring forms an important basis for self-reflection. Nonetheless, most existing self-monitoring applications offer only very limited interfaces for the analysis and exploration of data (Kazdin, 1974). With the introduction of mobile visual data analysis platforms, the information aggregated over time reveals valuable trends, as well as averages of a user's behavioral highs and lows. The aggregated data gives people a point of comparison across several trends and lets them identify whether a trend is positive or negative. In this way, visual data analysis can help users spot noticeable trends in their behaviors and activities in an interactive manner.
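The aggregation described above can be sketched simply: from a series of daily self-monitoring values, compute the average and flag behavioral highs and lows relative to it, so a visualization can highlight them. The function name and the 20% thresholds are illustrative assumptions.

```python
# Sketch of aggregating self-monitoring data for visual highlighting:
# compute the average of a series of daily values and flag the
# behavioral highs and lows relative to it. The 20% thresholds are
# arbitrary illustrative choices.

def summarize(values):
    avg = sum(values) / len(values)
    highs = [v for v in values if v > avg * 1.2]  # 20% above average
    lows = [v for v in values if v < avg * 0.8]   # 20% below average
    return avg, highs, lows

# Example: five days of logged calorie totals
summarize([2100, 1900, 3100, 2000, 1200])
# average 2060.0; 3100 flagged as a high, 1200 as a low
```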

Most self-monitoring tools are either wearable sensors or smartphones. Some of these sensors come with small displays or none at all, posing the unique problem of forcing the user to rely on additional hardware to track their behavior. To address this challenge, some health-conscious organizations have leveraged mobile devices with considerably larger displays, such as smartphone interfaces.

Some feedback for self-monitoring applications on mobile and handheld devices is delivered to the user in real time, whereas in other cases the user must wait until the data-collection process is complete. Such discrepancies arise from the device's form factor and the nature of the data being monitored. A clear example of a device offering real-time feedback is a pedometer such as the Fitbit, which shows a person's total number of steps on a small, inconspicuous display; some smartphones also support pedometer applications that include real-time feedback. Other self-monitoring devices, such as the Jawbone Up band, do not support real-time feedback because they lack a display, and instead rely on third-party applications to synchronize and view the collected data. Real-time feedback comes in handy because it allows one to alter behavior immediately, whereas changing a habit only after a long period of data collection, over hours, days, or weeks, can pose a significant challenge to the user.

Among the core objectives of a self-monitoring application is to facilitate the continued tracking of an individual's behavior. Achieving this objective is a hurdle that must be overcome to enjoy the full benefits of mobile devices fitted with self-monitoring applications, and the main problem arises from the forgetful nature of human beings. Visual reminders provide prominent and effective prompts that encourage self-monitoring users to reflect on and explore their collected data. These reminders can take the form of notifications on a user's smartphone, alarms, emails, and many other channels accessible to the user. In addition, a widget with a visual summary of all the collected self-monitoring data could serve as a quick access point and a starting point for data exploration.

2.5 Visual Data Extraction for Analysis

The effect of self-monitoring on weight loss has been measured using a wide array of approaches. Self-monitoring is a favored choice among clients seeking to lose weight by limiting their daily calorie intake and increasing their daily physical activity, and a proven technique has been self-monitoring intervention through visual interaction with the application.

Regarding users' data and the associated feedback, there are predetermined interventions for particular self-monitoring activities and behaviors. A good example is tracking a person's daily calorie intake: a balanced diet built on a wholesome nutritional plan is an indispensable part of a healthy lifestyle (Liberati, et al., 2009). These interventions also promote the user's wellness and reduce the risk of other major illnesses. For the overall design of appropriate self-monitoring interventions, two major aspects must therefore be taken into consideration:

  • The context of the collected information
  • The overall impression of the data

Context here refers to the values against which a user's measurements are evaluated relative to their expectations and objectives, and it can be subdivided into individual and normative context. Individual context describes the user's own values, to which newly collected values can be compared against previously collected information, the baseline (Baker & Kirchenbaum, 1993). Normative context, on the other hand, characterizes generalized values against which all collected data can be compared to help in the interpretation of the data, such as a recommendation to complete about a thousand steps each day (Liberati, et al., 2009). The impression of the collected data concerns either tracked data or manually collected data; where data is entered manually, the input may to some extent be incorrect. Examples of tracked data include the number of calories ingested in a day or the total time taken to complete a certain activity. Conversely, according to Baker and Kirchenbaum (1993), contextual data is the imprecise information against which users compare values when faced with forming an impression of either manually or automatically collected information, such as when establishing the baseline data. As a result, the information is not well defined, and the consequences of deviating from the plan differ and are also unclear; for example, it is better to eat one more unit of vegetables than to add another unit of oils and sugars.
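The two context types described above can be sketched as a single comparison step: a collected value is judged against the user's own baseline (individual context) and against a generalized recommendation such as a daily step target (normative context). The function name, the return shape, and the example numbers are illustrative assumptions.

```python
# Sketch of individual vs. normative context: compare one collected
# value against the user's own baseline and against a generalized
# recommendation (e.g. a daily step target). Names are illustrative.

def contextualize(value, baseline, norm):
    return {
        "vs_baseline": value - baseline,  # individual context
        "meets_norm": value >= norm,      # normative context
    }

contextualize(1200, baseline=900, norm=1000)
# {'vs_baseline': 300, 'meets_norm': True}
```

Presenting both numbers together lets feedback say not only "you met the recommendation" but also "you improved on your own past behavior," which matches the baseline comparison the text describes.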

2.6 Visual Data Scalability Analysis Review

The databases created in this era of highly advanced technology have enabled more interactive models for performing visual approaches. Visual analytics is a demanding discipline, given priority for handling data-mining analytics over complex information, and the challenges experienced in ongoing visual analytics research can be addressed through scalability.

The amount of data determines the level of challenge experienced when handling the data. The user display device and other cognitive limitations concerning the hardware used by a researcher in collecting the data can pose s major challenge. The display elements can have an ability to display less information than the fed data in the system. Closure and continuity are a requirement in conveying specific perceptions of data. Visualized clustering of many elements of containing negative information creates a challenge in the determination or n achieving the intended results. When making use of clusters, doing too much plotting of information can make the results impossible to make judgments because a true distribution of data cannot be achieved. Humans feed on various meals during the day and such kinds of food contain different amounts of calories and other nutrients. When the meals are cooked and served, different images can be seen to come out depending on the process and style of cooking. The major determinant of the outcome image being the recipe followed and the level of expertise. The dimensions used while taking images can be a great contributor to the capability of visualization to be achieved (Kerren, Purchase, Ward, & Dagstuhl Seminar on Information Visualization– Multivariate Network Visualization. 2014). The number of elements in a displayed item contributes greatly towards the scalability of a display. The level of computations leads to the challenges experienced when handling visual information that is used to analyze situations. The example of meals can be having several different images that can lead to different results depending on the manner of cooking. However, the process of fetching the data can be supported with a little selection of the items in the meal taken. Some of the inherent limits to the display capacity of displayed elements are determined by the scatterplots and coordinate matrices. 
When selecting the data, the user must ensure that every aspect of the chosen selection criteria has been applied to the required level. During the computation of the visualizations, the complexity of the algorithms used is a core computer science concern.

Some algorithms require quadratic effort. For example, when visualizing many items, the experience may remain interactive while certain items take long hours to update and process the input data. When dealing with such complex and numerous information, hardware-oriented approaches can be used to aid the distribution and parallelization of stored data, while other cases require software-oriented computational techniques to execute commands and filter information. Software-oriented techniques include regression analysis and data sampling. For example, when handling multi-resolution items, filtering must be applied to separate the high-resolution parts from the low-resolution ones. The computational and visual limitations are not independent of each other; they interact. Certain approaches perform the analysis within set limits simultaneously: when large samples are used, the computational effort is limited so that visuals can take effect in handling such complex data.
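
The two software-oriented techniques named above, data sampling and resolution-based filtering, can be sketched in a few lines. This is an illustrative sketch only; the data, threshold, and field names are assumptions, not part of any specific tool discussed here.

```python
import random

def sample(data, k, seed=0):
    """Data sampling: reduce a large dataset to k items so the
    visualization stays interactive."""
    return random.Random(seed).sample(data, k)

def keep_high_resolution(items, threshold):
    """Filtering: separate the high-resolution parts from the
    low-resolution ones."""
    return [item for item in items if item["resolution"] > threshold]

points = list(range(1_000_000))
subset = sample(points, 1_000)       # plot 1,000 points, not 1,000,000
tiles = [{"id": 1, "resolution": 300}, {"id": 2, "resolution": 72}]
high = keep_high_resolution(tiles, 150)
print(len(subset), [t["id"] for t in high])
```

Sampling bounds the number of elements the display must render, while filtering discards the low-resolution portion before any computation is spent on it.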

Display scalability is the ability of a data analysis tool to keep a visualization effective as it moves between output devices, from small digital assistants to wall-sized screens. Most current visualization systems are designed to be viewed on desktop displays, which limits their usability on small screens such as mobile devices.

Information scalability means that a subset of the data can be used to offer simpler visualizations. In general terms, it concerns the rate at which data changes dynamically and the scale at which a presentation facility can target a specific audience. The ability to handle heterogeneous data in several ways is also a form of information scalability.

Human scalability concerns the amount of human effort required to solve an analytical problem. It asks that techniques scale gracefully from a single user to a collaborative setting.

When dealing with algorithms, automated processing can fail to keep pace as datasets grow faster than the available computing infrastructure. It is projected that within the next 15 years there will be more effective and efficient ways of handling visual information than exist at present. Computation has its own challenges, such as the input of wrong formulae, but the extraction and processing of visual objects can address much of the data.

 

Chapter Three

3.0 Methodology

3.1 Visual Data Analysis Tools

3.1.1 Tableau

Tableau offers business intelligence solutions and is preferred as a visualization tool capable of producing interactive visualizations within a short period, aided by drag-and-drop options. Data can be delivered as heat maps, bubble charts, maps, scatter plots, and pie charts, drawn from the dashboard and other diverse datasets. Tableau supports actions such as drilling, aggregation, and highlighting within charts, helping users create visualizations that illuminate huge datasets. Data stored in Excel, text files, and CSV files is recognized directly, while a database connector and some expertise are needed to extract data from databases. Tableau can also perform definitions and calculations. The product line includes Tableau Server, Tableau Public, Tableau Desktop, and Tableau Mobile, which users choose among depending on the resources available to them (Morton, Balazinska, Grossman, & Mackinlay, 2014). Visualizations can be shared among different users with Tableau Server, with access restricted to the views each user may apply, and interaction is as simple as clicking the mouse on the screen.

3.1.2 Data-Driven Documents (D3.js)

In recent decades, data-driven documents have proved to be among the highest-rated tools for visual data analysis where the software can be obtained free. Anyone looking for a data-driven way of manipulating and visualizing DOM elements should consider the D3.js tool. D3.js is a JavaScript library for data-driven documents that manipulates documents based on data provided by the researcher (Nair, Shetty, & Shetty, 2016). The raw, unprocessed data is analyzed and the output generated with the help of SVG, HTML, and CSS. Some of the images produced by the D3.js tool are shown below.

 

Figure 3. 1 Bubble Chart

Figure 3. 2 Scatterplot Chart

 

 

Table 4. 1 D3.JS vs. TABLEAU

D3.js | Tableau
A JavaScript library; does not include packaged visualization software | A business intelligence product that includes a packaged visualization component
Open source and free | Proprietary and expensive
Difficult to learn because of the heavy coding required | Easy to learn, using a drag-and-drop interface
Building a visualization can take hours to days | A visualization can be produced in minutes by navigating and dropping items on the dashboard
Output is rendered as scalable vector graphics (SVG) | Visualizations can be exported in EMF, JPEG, BMP, and PNG formats
Struggles when the data runs to gigabytes | Can identify measures and dimensions and handle gigabytes of data

 

3.1.3 WebDataRocks

Getting an efficient tool that combines data analysis with visualization is a challenge for many researchers. WebDataRocks helps by enhancing accuracy, offering an efficient display, and extracting data from JSON and CSV files. WebDataRocks is a web-based pivot table that allows users to visualize given information in a better way, and the aggregates and insights it produces arrive in real time. Its major use is to sort, average, and count the records issued from the primary source, offering summarized data in the form of a grid. The reports delivered by this JavaScript tool are well suited to the web, and WebDataRocks offers reports that can be used universally by users from different industries (Huerta-Cepas, Serra, & Bork, 2016). The data used to generate the reports must be provided by the user, depending on the topic of study or the location of the population sample. The greatest advantage of using WebDataRocks is its numerous data analysis and examination features, which make the web reports easier for users to interpret and understand. WebDataRocks does not require deep familiarity with the latest high technology in the industry: once users are able to load and run JSON or CSV files, report generation can begin. Dicing and slicing of data are done by dragging and dropping fields, drilling them down, filtering, and sorting. The WebDataRocks tool can also be integrated with the Angular framework.
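
The core operation of a pivot table, grouping flat records and summarizing each group into a grid, can be sketched language-agnostically. This is a plain-Python stand-in for the idea, not the WebDataRocks API; the record fields are illustrative assumptions.

```python
from collections import defaultdict

def pivot(records, group_key, value_key):
    """Group flat records and report count, sum, and average per group,
    producing the kind of summarized grid a web pivot table shows."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[group_key]].append(rec[value_key])
    return {key: {"count": len(vals),
                  "sum": sum(vals),
                  "average": sum(vals) / len(vals)}
            for key, vals in sorted(groups.items())}

records = [
    {"meal": "breakfast", "calories": 300},
    {"meal": "breakfast", "calories": 280},
    {"meal": "lunch", "calories": 530},
]
print(pivot(records, "meal", "calories"))
```

Dragging a field onto a pivot's rows or values axis corresponds to choosing `group_key` and `value_key` here; the tool simply recomputes this grid on every interaction.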

3.1.4 BIRT

BIRT is a tool for creating and generating visualized reports that can be embedded in other web-based applications. It is a free, open-source tool that researchers can make use of while performing experiments on data visualization techniques. BIRT supports Java EE and Java applications in ways that help generate quick and detailed reports from the input data. Unlike other visual data analysis tools, BIRT is built from two major components that set it apart: a runtime element able to generate designs that can easily be deployed in any Java-enabled environment, and a visual report designer that helps create different designs according to the tool's requirements and abilities (Huerta-Cepas, Serra, & Bork, 2016). These two components are the core of how BIRT works. The tool also allows a charting integration engine to be installed into the BIRT designer. The data input to BIRT must come from sources compatible with the tool; acceptable sources include POJOs, Web Services, JDO data stores, XML, JFire Scripting Objects, and SQL databases.

3.1.5 Google Charts

Google has long been a powerful player in data analytics, visualization, and reporting, and the same applies to Google Charts. The tool is more than merely effective: it is free, open source, and designed to work in a simple way that does not give users a difficult time. The gallery generated by Google Charts is rich, with results customizable to user preferences. It provides multiple controls that can display dynamic data and supports cross-browser portability and compatibility (Huerta-Cepas, Serra, & Bork, 2016). With the latest technology trends, Google Charts is popular in the market for visualizing data using online tools.

3.1.6 Cytoscape.js

Cytoscape.js is another free, open-source tool that can perform visual data analysis. It is written in JavaScript and provides a library of visualization and graph analysis capabilities. Cytoscape.js is rated among the most efficient tools in the market for processing and manipulating data into interactively displayed graphs (Franz, Lopes, Huck, Dong, Sumer, & Bader, 2015). Cytoscape.js can also be integrated into an app of the user's choice.

3.2 Mobile Device Visualization Approaches

Visualizing information is a technique used when alternative automated tools for data analysis become complicated and the generated results fail to work as expected. Numeric data supports decision-making opinions that can be interpreted from the results. Data visualization highlights crucial information while hiding non-required details, and when users explore the analyzed data directly, findings are reached faster and with higher confidence. Mobile technology is now available to almost everyone in the world and can capture data in various forms, taking pictures and recording data being the primary ones. Using the data collected, purpose-built applications can perform the analysis. Activities on mobile devices require short engagement times, and results must be produced almost immediately. Tasks requiring long analysis periods are better done on a desktop, but mobile devices are well suited to monitoring instant activities such as collecting information from served meals or counting the number of steps a person takes in a day. Mobile devices extend users' decisions to when and where they are needed, and mobile components have made contemporary systems possible. While handling daily tasks and fulfilling assigned duties, mobile phones can run various applications and generate results without taking time from other activities; much as a person receives an urgent call and responds to it, the examination or analysis of visual data can be done in minimal time without hindering other activities. Despite the technical limitations of mobile device applications, mobile task performance has developed rapidly and significantly.
Among the specific features that give mobile devices an advantage in the examination of data are screen resolutions that now extend to 1920 x 1080 pixels.

Very intuitive and technical interaction methods are required to supplement soft touch-screen keys. The small space provided by mobile devices must be maximized with compact visualizations. The mobility of the devices also challenges the design of mobile interactions: noise and variable lighting in the usage context make desktop computers more stable than mobile devices. Fixed physical rooms have the advantage for recurring activities, but mobile devices are best when working in highly variable conditions, characterized by graphics that must be perceivable in both dark and light settings. These changing environmental conditions must be taken into consideration. Convenience, ease of use, and availability make mobile devices the better gadget for performing data analysis. In the same way that certain activities follow a routine, mobile devices can be included in routine activities, being constantly connected and offering supportive information effectively. Mobile devices also carry embedded sensors that are not present in desktops, including pedometers, physiological sensors, geographical positioning, light, accelerometers, and proximity sensors.

3.2.1 Approaches of Compact Visualizations

Visualization depends on how the space on the screen is utilized. To explore huge documents effectively, desktops are used to offer different perspectives and views, and the summary provided gives faster access to the required content. Various navigation and presentation techniques have been devoted to working with 2D data and extra-large documents on desktop systems, and differences in the visualization can be detected from the summary produced. To simplify the visualization process, a set of traditional solutions has been developed to counter the problem; they include:

  • Detail and overview approaches
  • Information page restructuring
  • Zooming and panning/scrolling techniques
  • Context & focus approaches
  • Off-screen objects or contextual visualizations

However, information space restructuring applies universally to both mobile and desktop applications. Manually designing web pages to fit every targeted device enables better use of this approach; apart from manual design, automatic reformatting offers a possible solution. The original layout can be preserved by compressing the available space into a thumbnail.
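
The thumbnail approach preserves the original layout by scaling it uniformly until it fits the small screen. A minimal sketch of that scale computation follows; the page and screen dimensions are illustrative assumptions.

```python
def thumbnail_scale(page_w, page_h, screen_w, screen_h):
    """Uniform scale factor that fits the whole page on the screen
    while preserving the original layout's aspect ratio."""
    return min(screen_w / page_w, screen_h / page_h)

# e.g. a 1280x2000 web page compressed onto a 360x640 phone screen
scale = thumbnail_scale(1280, 2000, 360, 640)
print(scale)                                  # limited by the narrow width
print(int(1280 * scale), int(2000 * scale))   # thumbnail dimensions
```

Taking the minimum of the two ratios is what keeps the original layout intact: scaling width and height by the same factor distorts nothing, at the cost of unused screen space along one axis.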

3.4 Cutting-edge Visualizations for Mobile Interaction

In this section, we develop a better understanding of mobile device capabilities that can be used to enhance and improve current mobile device performance. Task performance can be facilitated and sped up when highly interactive, easy-to-use systems are implemented. Because of the limited space on mobile devices, solid methods of data extraction and processing are needed. Mobile devices can, for example, be set to collect a user's eating behavior and perform an analysis; a behavior ring is produced with solid visualization in which timing can be tracked.

Despite the known advantages of self-monitoring devices, most of them have a high rate of non-wear, which ultimately leads to misleading feedback due to missing data. These self-monitoring devices are also usually unable to capture user behavior accurately. However, it has been noted that most users of self-monitoring devices have at least one smartphone, or access to one, with the capacity to operate as a self-monitoring tool (Finkelstein, Trogdon, Cohen, & Dietz, 2009). This paper investigates the design, development, implementation, and use of mobile self-monitoring applications, and in addition how they have affected aspects such as the collection and analysis of data and how feedback is relayed to the user (Finkelstein, Trogdon, Cohen, & Dietz, 2009).

The paper will use the example of a smartphone application, or 'app' as it is commonly called, that combines self-monitoring strategies using sensors built into the phone with a user's goals and objectives. The self-monitoring application uses the smartphone's built-in sensors to collect a user's activity and behaviors either manually or automatically, and the likelihood of non-wear is drastically reduced to more manageable levels. After data collection is complete, the smartphone can trigger an analysis and evaluation of the data and provide feedback in real time (Webber, Tate, & Quintiliani, 2008). Ultimately, the self-monitoring application allows users to interact with and review all the collected data and effectively alter their behavior and physical activity, such as deciding to reduce calorie intake and by how much.

3.4 Dietary Intake and Its Contribution to Weight Loss through Self-monitoring

Obesity is a major contributor to the continued rise in spending on healthcare products. According to Annual Medical Spending Attributable to Obesity: Payer- and Service-specific Estimates, between the years 2006 and 2008 annual spending on weight loss products shot from forty million dollars to well over a hundred and forty million dollars. It has been predicted that, should this trend continue unhindered, by the year 2023 more than 80% of U.S. citizens will be overweight (US Department of Agriculture and U.S. Department of Health and Human Services, 2010 Dec). As a result, most self-monitored weight loss programs aim at altering the activities and behaviors of users for a better lifestyle. Some of these interventions include:

  • A protracted and continuous intervention contact,
  • Self-monitoring strategies
  • A sense of accountability
  • Motivational cross-examination
  • Regular self-assessment
  • Consistent physical activity

The American Dietetic Association (ADA) has concluded that achieving a negative energy balance is one of the most significant factors directly affecting the extent and overall rate of weight loss over a given period of time (US Department of Agriculture and U.S. Department of Health and Human Services, 2010 Dec). Strategies used to achieve a negative energy balance include:

  • Calorie counting,
  • Modifying macronutrient composition
  • Manipulating a meal’s energy density
  • Opting for low-calorie diets

Reducing dietary fat and starches is a widely applicable technique for cutting total calorie intake by between 500 and 1,000 kilocalories (kcal) each day, which translates directly into one to two pounds per week for those aiming to lose weight. It has also been noted that even a small drop in calorie intake, when combined with an increase in a user's physical activity, results in clearly visible weight loss. This technique has a higher chance of implementation and an even greater possibility of being sustainable in the end.
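
The arithmetic behind the deficit figures above can be worked through using the commonly cited approximation of roughly 3,500 kcal per pound of body weight; the text itself does not state this conversion factor, so it is an assumption of this sketch.

```python
# Worked arithmetic: daily calorie deficit -> weekly weight loss,
# assuming the commonly cited ~3,500 kcal per pound of body weight.
KCAL_PER_POUND = 3500

def pounds_per_week(daily_deficit_kcal):
    return daily_deficit_kcal * 7 / KCAL_PER_POUND

print(pounds_per_week(500))    # 500 kcal/day deficit  -> 1.0 lb/week
print(pounds_per_week(1000))   # 1000 kcal/day deficit -> 2.0 lb/week
```

This confirms that the 500 to 1,000 kcal daily range stated in the text corresponds to the one-to-two-pound weekly figure under that approximation.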

The U.S. Department of Agriculture has also taken the initiative to help people better understand and interpret dietary guides and self-monitoring strategies; in essence, it promotes a diet that discourages high fat intake and encourages participants to eat fruits and vegetables. The general message is that appropriate nutritional habits can promote a person's health and diminish the risk of chronic disease. According to the guidelines of the Dietary Approaches to Stop Hypertension (DASH), a typical diet should represent healthy living by promoting the ingestion of vegetables and fruits, low-fat milk and milk products, and whole grains, with reduced consumption of meat; by the end, total calorie intake is cut by approximately 30% (United States Preventative Services Taskforce, 2012). The proposed diet plan, in conjunction with self-monitoring, has proved effective. Moreover, the pursuit of weight loss has been supported by self-monitoring tools owned and managed by the individual that help in tracking various activities.

3.5 The Range of Self-monitoring Mobile Device Applications

Many self-monitoring applications are developed to serve both health care providers and individuals at home. These applications vary in complexity and application, but in general they are adapted for everyday use by untrained individuals. Some, however, exhibit common medical terminology and utilities that may or may not be understood by non-health professionals. In a study published in 2012 (Conn, 2012 Dec), a group of individuals both with and without healthcare backgrounds indicated that the most common categories of mobile applications are clinical decision-support tools and medical education resources, both of which border closely on self-monitoring, especially when applied to weight loss and self-diagnostic scenarios (Bandura, 1998). This category can be regarded as patient-centered applications, each capable of accomplishing a wide array of functions such as managing chronic illness, lifestyle intervention, and self-diagnosis. These applications incorporate a variety of functions and usually log user data such as daily eating habits, total calorie intake, compliance with medical procedures, individual physical activity, and behavior in a mobile offline/online database.

A large number of applications in this category focus on exercise and weight loss. A smartphone's inbuilt camera, which has become somewhat of an industry standard, can capture images and record a photographic diary of the daily calorie intake from food and drinks ingested. Most self-monitoring applications that exploit the portability and ease of access of today's smartphones track calorie intake and weight loss goals by objectively tracking daily physical activity from sensors within the phone, such as pedometers and accelerometers (Conn, 2012 Dec).

3.6 An Investigation of the Effect of Mobile Devices on Self-monitoring

Among the most noteworthy recurring hurdles in the self-monitoring application field is the continued need for effective and consistent processes for measuring user behavior and physical activity for the purposes of scrutiny and intervention toward continued health benefits (Finkelstein, Trogdon, Cohen, & Dietz, 2009). There is growing apprehension over the justification of mobile self-monitoring applications as a direct result of the possibility of errors, misinterpretation, and bias. The youth in particular, who have adopted 'objective' measures of physical activity and behavior such as heart rate monitors, accelerometers, and global positioning systems (GPS), all available on their smartphones, are more likely to misinterpret the information gathered. For example, more than a few researchers have found inconsistencies between self-monitoring feedback and objective assessment methods when comparing levels of physical activity and associated behavioral patterns (US Department of Agriculture and U.S. Department of Health and Human Services, 2010 Dec). Nonetheless, most of these objective self-monitoring activities are presently being deployed at large scale as a means of investigating adolescents' behavior and physical activity, with the promise of obtaining a more truthful valuation of one's physical activity and behavior.

3.7 Self-monitoring and Visual Data Analysis of Dietary Intake

Almost all studies aimed at investigating nutritional self-monitoring have identified a substantial relation between self-monitoring strategies and weight loss goals. Some studies used paper diaries, and others used some variation of the paper diary with a supplementary smartphone application (Jakicic, 2002 Dec). The quantification and subsequent analysis of nutritional self-monitoring vary. In some studies, for instance, users were trained to take note of their physical activity, disposition, eating environment, water intake, and other important behavioral variables that directly affect total daily calorie intake. The self-monitoring applied to these participants comprised recording and analyzing only 5 variables, and when assessing the results of the self-monitoring strategies, the researchers used these dietary variables to determine a monitoring index (Jakicic, 2002 Dec). In these self-monitoring endeavors, those with complete records saw a considerably larger difference in terms of losing weight. Moreover, the results were even stronger for users with higher self-monitoring fulfillment, as described by Yon, Johnson, Harvey-Berino, Gold, and Howard (2007), who assessed how the benefits of self-monitoring relate directly to its frequency.
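
A monitoring index of the kind described above can be sketched as the fraction of the tracked variables a participant actually recorded on a given day. This is a hypothetical reconstruction: the variable names and sample records below are illustrative assumptions, not the original study's definition or data.

```python
# Hypothetical monitoring index: fraction of the five tracked
# dietary/behavioral variables recorded on a given day.
VARIABLES = ("physical_activity", "disposition", "eating_environment",
             "water_intake", "calorie_intake")

def monitoring_index(day_record):
    recorded = sum(1 for v in VARIABLES if day_record.get(v) is not None)
    return recorded / len(VARIABLES)

complete_day = {v: 1 for v in VARIABLES}                  # all 5 recorded
partial_day = {"water_intake": 8, "calorie_intake": 1850}  # only 2 of 5
print(monitoring_index(complete_day))  # 1.0
print(monitoring_index(partial_day))   # 0.4
```

Averaging such a per-day index over a study period gives one plausible way "complete records" could be distinguished from partial ones when relating self-monitoring adherence to weight loss.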

The introduction of smartphone technology, with access to the Internet, for use in self-monitoring has given rise to a new generation of strategies. It has since been reported that self-monitoring programs delivered over the Internet were more strongly associated with weight loss. Yon compared the results of a weight loss study that incorporated smartphone technology in self-monitoring with a prior study that used traditional self-monitoring and found significant variances in the total weight lost in direct relation to the user's adherence (Yon, Johnson, Harvey-Berino, Gold, & Howard, 2007).

3.8 Self-monitoring and Visual Data Analysis of Physical Activity

Of the efforts that used traditional diaries to record physical activity and behavior only, few examined the part played by self-monitoring strategies when the question of weight loss is raised. Users who participated in the self-motivation and assessment were asked to take note of their daily routine in relation to the type of exercise and its duration. Self-monitoring was defined by the total time spent completing certain physical activities. The results clearly showed that participants who adhered to consistent self-monitoring of their behaviors and physical activities not only achieved significantly better results but also faced fewer challenges along the way.

3.9 Self-monitoring and Visual Data Analysis of Weight

Only recently have researchers backed weight self-monitoring as a plausible way to increase users' cognizance of their weight, its relation to the number of calories ingested, and the physical activities required to achieve the desired weight loss goals. One researcher directed descriptive subsidiary studies within two ongoing trials, using a single-item survey to evaluate the occurrence of self-weighing among the trial participants (US Department of Agriculture and U.S. Department of Health and Human Services, 2010 Dec). As a control measure, a weight gain deterrence trial and a weight loss trial were administered at three points. In the weight gain experiment, users who weighed themselves daily often claimed to have lost weight, while less frequent self-weighing was linked to instances of weight gain. Across daily, weekly, and monthly self-monitoring, weighing was related to weight loss, and more recurrent weighing was symbolic of even greater achievement over a 24-month self-monitoring weight loss period (Foster, Makris, & Bailer, 2005).

Two fully randomized experiments addressed daily self-weighing within a self-monitoring and regulatory framework as the main strategy to determine the effect of weighing among three groups: a face-to-face group, an internet-based group, and an 18-month trial fixated on the deterrence of regaining weight. The results showed that the groups gradually increased their day-to-day weighing, which was expressively connected with a lower possibility of regaining weight; even so, the users' observance of weighing progressively lessened over time (Choe, Lee, Munson, Pratt, & Klentz, 2013). Another timed clinical experiment compared user behavior and self-regulation strategies: members of one group were asked to record their daily weight using a digital weighing scale, while members of the second group received a modified behavioral treatment strategy whereby they were directed not to record their weight until the 11th week and then to weigh themselves weekly.

After 20 weeks, the frequency of weighing was expressively associated with weight loss; however, there was no noteworthy variance in the weight loss margins between the two groups. The electronic scale offered its users feedback in the form of objective data confirming the self-reported adherence to weighing, which was typically reported on over 95% of the days the experiment was live.

Chapter Four

4.0 Data Analysis and Discussion

4.1 Quantitative Visual Data Analysis Using Regression Models

Computational and interactive visual data analysis methods offer a successful way to integrate and study data. During interactive data analysis workflows and explorations, quantitative means are used to externalize the outcome of the input data from the research. Here, quantitative data analysis was adopted, and regression models were applied in the generation and interpretation of the results. The dataset given by users is used to generate results from the selected models, and a numeric coefficient is also provided to the user for a better understanding of the outcome. Depending on the availability of the chosen models, subsequent workflow steps can be used to fulfill the tasks, and complex models are then reconstructed by inverting the applied local models to break down the complexity involved. A sample population of 30 respondents was examined, and the average calorie count from the meals served was recorded. A statistical analysis indicated that the majority of calorie intake occurred at the lunchtime meal: 40% of the calories consumed in a day came from lunch. The quantitative descriptive statistics gave mean calorie consumption per meal of 286.5 calories for breakfast, 530.5 for lunch, 430.3667 for dinner, and 88.1333 for snacks. The data collected from the respondents and the statistical results are indicated below. The recommended calorie intake is about 2,500 calories for men and 2,000 for women; the research, however, indicated that every respondent was consuming fewer calories than the recommended amount. A possible reason such a low calorie intake could sustain healthy living is a lack of exercise: calories consumed are not burnt, which maintains the calorie balance.
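
The reported meal means and the lunch share can be reproduced directly from the column totals in Table 4.2; only those totals and the sample size of 30 are used below.

```python
# Reproducing the reported means from the column totals in Table 4.2
# (8595, 15915, 12911, 2644 calories over 30 respondents).
totals = {"breakfast": 8595, "lunch": 15915, "dinner": 12911, "snacks": 2644}
n = 30

means = {meal: total / n for meal, total in totals.items()}
print(means)

# Lunch's share of all calories consumed in a day.
share = totals["lunch"] / sum(totals.values()) * 100
print(round(share, 1))   # roughly 40%, as stated in the text
```

The computed means match the descriptive statistics table (286.5, 530.5, 430.3667, and 88.1333), and lunch accounts for about 39.7% of daily calories, consistent with the 40% figure quoted above.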

 

 

Table 4. 2 Sample Population dataset

Calories Counter
Meals
Respondents Breakfast Lunch Dinner Snacks Total
1 300 528 412 92 1333
2 297 538 428 87 1352
3 312 546 450 81 1392
4 289 512 467 93 1365
5 309 516 432 91 1353
6 311 572 419 83 1391
7 299 564 463 89 1422
8 293 513 443 81 1338
9 271 502 427 82 1291
10 315 512 429 92 1358
11 272 519 434 90 1326
12 269 540 439 87 1347
13 298 532 417 82 1342
14 291 529 438 93 1365
15 289 516 428 94 1342
16 268 503 421 86 1294
17 301 537 431 87 1373
18 250 512 409 88 1277
19 267 517 416 92 1311
20 238 523 422 94 1297
21 289 561 420 97 1388
22 273 551 431 89 1366
23 248 539 426 85 1321
24 315 542 428 94 1403
25 264 526 423 84 1322
26 319 537 432 86 1400
27 283 520 430 82 1342
28 271 526 431 83 1339
29 294 548 429 91 1391
30 300 534 436 89 1389
Total 8595 15915 12911 2644
Average 1351

Table 4.3 Descriptive Statistics

Breakfast   Lunch   Dinner   Snacks  
Mean 286.5 Mean 530.5 Mean 430.3667 Mean 88.1333
Standard Error 3.8856 Standard Error 3.2241 Standard Error 2.3525 Standard Error 0.8329
Median 290 Median 528.5 Median 429 Median 88.5
Mode 289 Mode 512 Mode 428 Mode 92
Standard Deviation 21.2826 Standard Deviation 17.6591 Standard Deviation 12.8854 Standard Deviation 4.5617
Sample Variance 452.9483 Sample Variance 311.8448 Sample Variance 166.0333 Sample Variance 20.8092
Kurtosis -0.4521 Kurtosis -0.1578 Kurtosis 2.1413 Kurtosis -1.0917
Skewness -0.4850 Skewness 0.5299 Skewness 1.1909 Skewness -0.0200
Range 81 Range 70 Range 58 Range 16
Minimum 238 Minimum 502 Minimum 409 Minimum 81
Maximum 319 Maximum 572 Maximum 467 Maximum 97
Sum 8595 Sum 15915 Sum 12911 Sum 2644
Count 30 Count 30 Count 30 Count 30
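The headline figures above can be reproduced directly from the column totals of Table 4.2; a minimal Python sketch, using only values taken from the table:

```python
# Column totals from Table 4.2 and the sample size.
meal_totals = {"breakfast": 8595, "lunch": 15915, "dinner": 12911, "snacks": 2644}
n_respondents = 30

# Mean calories per meal, matching the means reported in Table 4.3.
means = {meal: total / n_respondents for meal, total in meal_totals.items()}
print(means["breakfast"], means["lunch"])  # 286.5 530.5

# Share of daily calories consumed at lunchtime (~40%, as reported above).
lunch_share = meal_totals["lunch"] / sum(meal_totals.values())
print(round(lunch_share * 100, 1))  # 39.7
```

The lunch share computed from the column totals rounds to 39.7%, which is the basis for the approximately 40% figure quoted in the text.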

Figure 4.1 Graphical Representation of Results

Figure 4.2 Calories Counter

 

4.2 Surveys on Human Effect Caused by Handheld Mobile Devices

Handheld mobile devices have been adapted to exploit the latest computing capabilities, such as video, e-commerce, internet communication, and information retrieval. The incorporation of such features has made mobile devices popular, with many tasks now expected to be completed on them. Every major social site has developed apps compatible with these devices, giving owners access to global platforms. With so many activities and tasks completed on mobile devices, the time spent on them has an impact on users' health. As part of this project's research, a survey was conducted with five female and five male participants, each of whom answered a set of four prepared questions. The region surveyed was New York City. To preserve confidentiality, participants' names were replaced with alphabetical letters, protecting the privacy of their personal information. The responses are summarized below:

Table 4.4 Summarized Demographic Data

Respondent Age Hours spent on a mobile device
A 56 1
B 39 3
D 36 3
E 15 4
F 27 4
I 42 4
C 39 5
J 35 5
G 28 7
H 18 12
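The hours column of Table 4.4 can be summarized with Python's standard statistics module; a small sketch using the values from the table:

```python
import statistics

# Hours per week spent on a mobile device, from Table 4.4 (respondents A-J).
hours = [1, 3, 3, 4, 4, 4, 5, 5, 7, 12]

print(statistics.mean(hours))    # 4.8
print(statistics.median(hours))  # 4.0
print(min(hours), max(hours))    # 1 12
```

The minimum and maximum confirm the one-to-twelve-hour range reported for the first survey question.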

 

4.2.1 Questions

How long and how often do you make use of your mobile device?

The time respondents spend on their mobile devices ranges between one and twelve hours a week. The collected responses are presented in the chart below.

Figure 4.3 Hours spent on handheld devices (HHDs)

 

What is the purpose of your mobile device: entertainment, conversation, or texting?

All three purposes apply to the respondents, but during the interview we asked each participant to name only their primary use. Most participants use their mobile devices mainly for texting, some for conversation, and the fewest for entertainment.

Table 4.5 Purpose of mobile device

Respondent Age Purpose
A 56 For conversation
I 42 For conversation
J 35 For conversation
E 15 For entertainment
H 18 For entertainment
B 39 For texting
D 36 For texting
F 27 For texting
C 39 For texting
G 28 For texting

What discomfort have you encountered in your hand or shoulder while using mobile devices?

Among the ten respondents, nine confirmed having experienced hand and shoulder discomfort after prolonged use of mobile devices.

Table 4.6 Discomforts encountered

Respondent Age Hours Purpose Hand and shoulder discomfort
A 56 1 For conversation No
B 39 3 For texting Yes
D 36 3 For texting Yes
E 15 4 For entertainment Yes
F 27 4 For texting Yes
I 42 4 For conversation Yes
C 39 5 For texting Yes
J 35 5 For conversation Yes
G 28 7 For texting Yes
H 18 12 For entertainment Yes

Do you ever experience a tingling sensation in your hand or shoulder after typing on a mobile device?

Seven of the ten respondents acknowledged having experienced a tingling sensation in their hand or shoulder after prolonged typing.

Table 4.7 Tingling sensation

Respondent Age Hours Purpose Hand and shoulder discomfort Tingling sensation experience
A 56 1 For conversation No No
B 39 3 For texting Yes Yes
D 36 3 For texting Yes Yes
E 15 4 For entertainment Yes No
F 27 4 For texting Yes Yes
I 42 4 For conversation Yes No
C 39 5 For texting Yes Yes
J 35 5 For conversation Yes Yes
G 28 7 For texting Yes Yes
H 18 12 For entertainment Yes Yes
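The proportions reported for these two questions follow directly from Table 4.7; a short sketch tallying the responses:

```python
# (discomfort, tingling) responses from Table 4.7, respondents A, B, D, E, F, I, C, J, G, H.
responses = [
    ("No", "No"),    # A
    ("Yes", "Yes"),  # B
    ("Yes", "Yes"),  # D
    ("Yes", "No"),   # E
    ("Yes", "Yes"),  # F
    ("Yes", "No"),   # I
    ("Yes", "Yes"),  # C
    ("Yes", "Yes"),  # J
    ("Yes", "Yes"),  # G
    ("Yes", "Yes"),  # H
]

discomfort = sum(1 for d, _ in responses if d == "Yes")
tingling = sum(1 for _, t in responses if t == "Yes")
print(discomfort, "of", len(responses))  # 9 of 10
print(tingling, "of", len(responses))    # 7 of 10
```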

 

4.3 Discussion

Users of the developed applications have no knowledge of the methodologies and programs used to create the systems; the major objective is to simplify the processing of information. The ability of mobile devices to read or scan codes on products in malls could, in principle, be applied to generate information from images. However, ordinary images carry no hidden codes, which makes extracting information from pictures difficult. Despite the high sensitivity and data-extraction processes required, advances in the resources used must be considered for the project to succeed. Mobile devices can be used to offer detailed information, but the health effects can be severe: the analysis above indicates that tingling sensations and discomfort are major problems experienced by users, and the postures adopted while using the devices for long periods can lead to further complaints such as neck pain.

The investigations and deductions presented in this paper result from applying scientific methods to self-monitoring. Self-monitoring is a strategy applied to make the user more aware of target behavioral goals and physical activities. Research into self-monitoring is broadly consistent and usually allied with weight-loss programs (Sakaraida, 2010). Because of inconsistencies in how self-monitored diets and physical activities were quantified, it was not possible to establish the exact frequency of self-monitoring that translated into a discernible difference in overall weight outcomes. The weighing investigations, however, show substantial differences in weight loss between users who weigh themselves daily, those who weigh themselves weekly, and those who weigh themselves less frequently. This outcome was confirmed through a systematic review of the self-monitoring literature.

Most of the participants were included in studies that implemented a descriptive design, which exposed procedural flaws in clinical trials of self-monitoring strategies. These limitations have greatly influenced the level of evidence and thus affected the deductions and subsequent recommendations that can be drawn from this study of self-monitoring strategies on mobile devices. The strongest finding was the reliable support for self-monitoring strategies across various smartphone platforms in the experiments covered by the review period. Conversely, because of the homogeneity of the participating samples, generalization of the collected information and visual feedback was restricted to overweight or obese people (US Department of Agriculture & U.S. Department of Health and Human Services, 2010). This is among the main limitations in assessing the capability, adherence, and influence of self-monitoring strategies, visual data analysis, and real-time feedback across the worldwide smartphone network, and it dictates the areas on which future research into self-monitoring strategies should focus.

A further organizational flaw of the assessment was the way self-monitoring itself was evaluated, which left room for quantification bias. The exceptions were the initial studies, which used analysts to evaluate traditional paper diaries of self-monitoring activities (such as foods eaten, when they were eaten, quantities, and total calorie intake), and a recent experiment aimed at defining self-monitoring adherence. None of the investigations reported criteria for assessing self-monitoring strategies or for verifying the completeness of the paper diaries or logs. Indeed, participants reported recording calorie intake on days when the paper diary was never opened and no documentation was made, a known weakness of self-reported paper logs (Bandura, 1998). The use of technology and mobile electronic devices has greatly improved self-monitoring conduct by providing objective authentication of self-reported behaviors of interest.

Even though there were procedural restrictions in the investigation of self-monitoring strategies used worldwide, there was sufficient supporting evidence for a reliable and noteworthy positive interdependence between self-monitoring of dietary plans, behaviors, and physical activity on the one hand and successful weight-management outcomes on the other. The investigation also identified several open questions, including the ideal frequency and duration at which a self-monitored diet and physical activity plan is effective.

The surveys on mobile device usage and meal consumption revealed that people use mobile devices all the time and take meals every day at different intervals. Because both activities recur in people's lives, finding a way to analyze them without imposing restrictions can make it easier to understand the health challenges faced worldwide. Many chronic diseases are now managed with simple mobile gadgets that patients can operate themselves to perform self-tests. An example of such an analysis tool is the kit made for diabetic patients: patients can determine their own sugar levels and either take the recommended medication or quickly communicate the result to healthcare providers.

The situations examined above provide evidence of how information can be analyzed using various tools. Apart from the quantitative technique, a qualitative method of analyzing visual data on mobile devices can be applied. When implementing the qualitative approach, the following aspects must be taken into consideration:

  • Classifying field notes, codes, interview scripts, or observations by using inferred imagery intended to determine what is significant
  • Examining the aforesaid categories of information to identify relationships between the analyzed items
  • Making differences, patterns, and commonalities explicit
  • Formalizing theoretical constructs that help to draw inferences from related cases in time and place

 

 

Chapter Five

5.0 Conclusions and Further Research

5.1 Conclusion

The research performed on the ability of mobile devices such as Android phones to perform various tasks reveals that much more can be done with these simple devices. Since most users do not understand how much such phones can do, it is high time manufacturers included useful applications such as self-monitoring tools that can advise on nutrition and other health-related aspects of the food consumed. At present, people with internet-enabled mobile devices hold mobile-wallet payment accounts that allow them to pay for purchased items. While paying for items, the information about the purchased food could be entered into the device and pictures of it taken for analysis before eating.

Despite the various investigations into the ability of mobile devices to collect and analyze data, research on analyzing meals from pictures is not yet complete, and further work and tests are required before pictures can reliably yield the exact details of a meal's ingredients. The major challenge facing visualized data analysis on mobile devices is that the same meal can turn out differently depending on the recipe followed. Beyond recipes, cooking methods can also produce different outcomes for the same food cooked by different people. When the result differs, the pictures taken for analysis will differ, giving wrong results. As a prototype, result generation is currently aided by manually keyed-in information that helps the programmed application compute the number of calories in a specific meal. When the ingredients used are submitted into the application and the entry is finalized by taking a picture, the information contributes much to determining the required outcome.
The input of such information has been simplified by creating a list of ingredients that can be used in cooking various meals, so the person performing the calorie count takes little time to select the items from the list and then quickly photographs the meal.
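The ingredient-selection step described above might be sketched as follows. The ingredient names and per-100g calorie values here are hypothetical illustrations, not data from the project; a real application would draw them from a nutrition database.

```python
# Hypothetical per-100g calorie lookup (illustrative values only).
CALORIES_PER_100G = {"rice": 130, "chicken": 165, "broccoli": 34, "butter": 717}

def estimate_calories(ingredients):
    """Sum the calories for a list of (ingredient, grams) pairs selected by the user."""
    return sum(CALORIES_PER_100G[name] * grams / 100 for name, grams in ingredients)

# The user picks ingredients and quantities from the list, then photographs the meal.
meal = [("rice", 200), ("chicken", 150), ("broccoli", 100)]
print(estimate_calories(meal))  # 541.5
```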

A major challenge facing the effective use of mobile devices is the size of the screen and the ever-changing user input interface. The amount of information expected on a mobile screen is often more than the screen can accommodate. For example, the display of graphs and other statistical representations can be ineffective because the limited space may leave some information invisible. Screen size also raises the issue of the eye complications many people suffer after extended hours of working at computer and laptop screens. With eye problems increasing, many people are not comfortable using small-screened mobile devices to analyze food data drawn from multiple images and selected inputs. Since the screen is a major hindrance to viewing results on mobile devices, better analysis tools and techniques must be devised to counter the challenge.

5.2 Future Research

While working on ways to improve the system or program used to generate results on mobile devices, simple visualization tools should be preferred over textual reports or responses. The current way of giving feedback on the input information does not match the trends present in the analyzed data. Representations such as graphs and charts can perform better because they do not require long written explanations to convey a recommendation; their trends are quick and easy to understand. For example, when bar graphs are used, the shortest bar indicates low consumption of a product while a long bar indicates excessive consumption. Future work can concentrate on presenting the captured data as charts, graphs, and other visuals that can be interpreted without explanation. Visuals convey concepts quickly and in an unbiased manner, because even those who cannot read can understand what a diagram or graph means when displayed on the screen. Beyond conveying details about the subject matter, data visualization can accomplish many things when implemented with proper standards for displaying captured information. With the required improvements and upgrades, data visualization could support:

  • Predicting revenue for businesses, institutions, and organizations from graphical representations of past periods
  • Clarifying the factors affecting certain behaviors, for customers, politicians, doctors, nutritionists, and many other fields, provided the required data is supplied and well executed by the programs in the installed applications
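As an illustration of the bar-graph idea, and keeping the screen-size constraints of mobile devices in mind, a compact text-based bar chart of the mean calories per meal from Table 4.3 can be sketched. This is an illustration only, not the project's implementation:

```python
def text_bar_chart(values, width=40):
    """Render a horizontal bar chart; bars are scaled to the largest value."""
    longest = max(values.values())
    lines = []
    for label, value in values.items():
        bar = "#" * round(value / longest * width)
        lines.append(f"{label:<10}{bar} {value}")
    return "\n".join(lines)

# Mean calories per meal (Table 4.3, rounded); lunch gets the longest bar.
mean_calories = {"breakfast": 286.5, "lunch": 530.5, "dinner": 430.4, "snacks": 88.1}
print(text_bar_chart(mean_calories))
```

Even without labels or explanation, the relative bar lengths immediately show that lunch dominates daily calorie intake and snacks contribute least, which is exactly the at-a-glance quality argued for above.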

The visuals to display must be chosen according to age and preference. Different age groups need the results of the visualization analysis tools presented in different ways. To enable consistent use of the applications among various groups of people, the output should be tailored to the user's age. For example, children could be shown animated charts and graphs that play songs of praise for anyone who eats healthily.

Visual data analysis requires groundwork that can offer guidelines on how feedback is presented. Since the project is a new technology in the market, enabling the capture of pictures and the generation of feedback by extracting information from images, clear steps toward that goal are needed for the project to succeed. Solid data is a basic requirement for processing and analyzing information to meet the needs of the audience. The aspects to consider when organizing a data visualization technology include:

  1. Understand what is to be visualized. The minimum information required to guide the generation of the intended feedback must be supplied to the system.
  2. Understand how different audiences process information. Elderly users, for example, care less about color schemes than about the message contained in the application's response.
  3. The visuals used must make the information easy and simple to understand. Complex results can discourage users from adopting such data visualization tools because of the difficulty of interpreting them.
  4. The analyzed data must be understood by the application's users to avoid using the wrong data sources. For example, the cardinality and size of the collected data should be limited so that values are generated in at most two columns, given the small size of mobile device screens.

Instead of using mobile devices to take pictures of objects and items without obtaining results from the contents of the images, this research will aid a better understanding of the environment and of the products used daily. Fitness applications are already used to encourage people to exercise and to track the milestones achieved each day; the same is expected for the visualization of events and activities. Although the entire idea revolves around the food sector, future improvements and advancements can extend it to tracking various activities using images. Technology is a master at simplifying things and creating gateways toward a better world through innovation. Traditionally, pictures were drawn by artists on paper, but today digital photos can be taken and stored for long periods. The idea of using visuals to capture and process data has a great impact on people's lifestyles, because most things are better explained through images than through raw data. For example, explaining the contents of three plates of different foods can be a challenge, but a picture explains it well and offers firsthand information, because a picture cannot be altered the way recorded data figures can. Therefore, lightweight visual data analysis performed on mobile devices can make a big difference in people's lives.

References

Baker, R. C., & Kirchenbaum, D. S. (1993). Self-monitoring may be necessary for successful weight control. Behavioral Theory, 24:377–394.

Bandura, A. (1998). Health Promotion from the Perspective of Social Cognitive. Psychological Health, 13, 623–649.

Chittaro, L. (2006). Visualizing information on mobile devices. IEEE Computer, 39, 40–45.

Choe, E. K., Lee, B., Munson, S., Pratt, W., & Kientz, J. A. (2013). Persuasive performance feedback: The effect of framing on self-efficacy. AMIA Annual Symposium Proceedings, 825–833.

Conn, J. (2012, December 10). Most-healthful applications. Modern Healthcare, 42(50), 30–32.

Finkelstein, E. A., Trogdon, J., Cohen, J., & Dietz, W. (2009). Annual medical spending attributable to obesity: Payer-and service-specific estimates. Health Aff (Millwood), 28: w822 – w831.

Foster, G. D., Makris, A. P., & Bailer, B. A. (2005). Behavioral treatment of obesity. American Journal of Clinical Nutrition, 82, 230S–235S.

Fox, S., & Duggan, M. (2012). Mobile health has found its market: Smartphone owners. Retrieved from Pew Research Center, Pew Internet and American Life Project: http://pewinternet.org/Reports/2012/Mobile-Health/Key-Findings.aspx

Franz, M., Lopes, C. T., Huck, G., Dong, Y., Sumer, O., & Bader, G. D. (2015). Cytoscape.js: A graph theory library for visualization and analysis. Bioinformatics, 32(2), 309–311.

Huerta-Cepas, J., Serra, F., & Bork, P. (2016). ETE 3: Reconstruction, analysis, and visualization of phylogenomic data. Molecular Biology and Evolution, 33(6), 1635–1638.

Jakicic, J. M. (2002 Dec). The role of physical activity in the prevention and treatment of body weight gain in adults. Journal of Nutrition, 132(12): 3826S-3829S.

Jones, N., Furlanetto, D. L., Jackson, J. A., & Kinn, S. (2007). An investigation of obese adults’ views of the outcomes of dietary treatment. J Hum Nutr Diet., 20: 486–494.

Kazdin, A. E. (1974). Reactive self-monitoring: The effects of response desirability, goal setting, and feedback. Journal of Consulting and Clinical Psychology, 42(5), 704-716.

Kerren, A., Purchase, H. C., Ward, M., & Dagstuhl Seminar on Information Visualization– Multivariate Network Visualization. (2014). Multivariate network visualization: Dagstuhl Seminar #13201, Dagstuhl Castle, Germany, May 12-17, 2013 : revised discussions.

Liberati, A., Altman, D. G., Tetzlaff, J., Mulrow, C., Gøtzsche, P. C., Ioannidis, J. P., . . . Devereaux, P. J. (2009). The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration.

Linde, J. A., Jeffery, R. W., French, S. A., Pronk, N. P., & Boyle, R. (2005 Dec). Self-weighing in weight gain prevention and weight loss trials. Annals of Behavioral Medicine, 30(3): 210-6.

Morton, K., Balazinska, M., Grossman, D., & Mackinlay, J. (2014). Support the data enthusiast: Challenges for next-generation data-analysis systems. Proceedings of the VLDB Endowment, 7(6), 453–456.

Nair, L., Shetty, S., & Shetty, S. (2016). Interactive visual analytics on Big Data: Tableau vs. D3.js. Journal of e-Learning and Knowledge Society, 12(4).

Rohrer, J. E., Cassidy, H. D., Dressel, D., & Cramer, B. (2008). The effectiveness of a structured intensive weight loss program using health educators. Disease Management and Health Outcomes., 16, 449-454.

Sakaraida, T. J. (2010). Health promotion model. Mosby Elsevier.

Singh, A., Dey, N., Ashour, A., & Santhi, V. (2017). Web semantics for textual and visual information retrieval.

Tomar, G. (2017). The human element of big data: Issues, analytics, and performance.

U.S. Department of Health & Human Services. (2016, June 16). Defining Adult Overweight and Obesity. Retrieved from Centers for Disease Control and Prevention: https://www.cdc.gov/obesity/adult/defining.html

United States Preventative Services Taskforce. (2012). Screening for and management of obesity in adults. U.S. preventive services task force recommendation statement.

US Department of Agriculture & U.S. Department of Health and Human Services. (2010, December). Dietary Guidelines for Americans (7th ed.). Washington, DC: U.S. Government Printing Office.

Varona-Marin, D., Scott, S. D., & University of Waterloo. (2016). The lifecycle of a whiteboard photo: Post-meeting usage of whiteboard content captured with mobile devices.

Webber, K. H., Tate, D. F., & Quintiliani, L. M. (2008). Motivational interviewing in internet groups: a pilot study for weight loss. J Am Diet Assoc.., 108: 1029–1032.

Yon, B. A., Johnson, R. K., Harvey-Berino, J., Gold, B. C., & Howard, A. B. (2007). Personal digital assistants are comparable to traditional diaries for dietary self-monitoring during a weight loss program. Journal of Behavioral Medicine, 30: 165–175.

 

 

Bibliography

Automatic Detection of Dining Plates for Image-Based Dietary Evaluation: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3739713/

Dish Detection and Segmentation for Dietary Assessment on Mobile Phones: http://madima.org/madima2015/wpcontent/uploads/2015/10/Madima15_segmentation_final2.pdf

Food Image Analysis: Segmentation, Identification, and Weight Estimation: https://ieeexplore.ieee.org/document/6607548/

Lightweight Visual Data Analysis on Mobile Devices: Providing Self-Monitoring Feedback: https://kops.uni-konstanz.de/handle/123456789/39362

Mixed Reality Environments as Ecologies for Cross-Device Interaction: https://kops.uni-konstanz.de/handle/123456789/43028?locale-attribute=en

P.D. Leedy and Jeanne E. Ormond. “Practical Research: Planning and Design”. Pearson
Education, 2005. (Main library: FOLIO–001.42-LEE)

Rugg, G. “A Gentle Guide to Research Methods”. Open University Press, 2007. (Main
library: 378.242-RUG).

Snakes assisted food image segmentation: https://ieeexplore.ieee.org/document/6343437/

Specular Highlight Removal for Image-Based Dietary Assessment: https://ieeexplore.ieee.org/document/6266421/

Strunk, W. and White, E.B., “The Elements of Style”, Allyn and Bacon, 4th Edition,
2000. (Main library: 808-STR).

Swetnam, D. “Writing your dissertation: how to plan, prepare and present your work
successfully”, Oxford University Press, 3rd Edition, 2000. (Main library: 378.242-SWE).

 

 

Appendices

Appendix 1. Chart from The Visual Organization: Data Visualization, Big Data, and the Quest for Better Decisions on page 33

 

 

Appendix 2. Figure from Visual Analytics book by Daniel A. Keim

Appendix 3. Figure on building blocks of visual analytics

 

 

Appendix 4. Figures from Handbook of Data Visualization